Conversation
Introduces PipelineSession, FixQueueItem, SessionState, FixStatus dataclasses and a SessionStore that persists/loads session JSON under <root>/.refactron/sessions/. 8 TDD tests all green, flake8 clean. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
… ID mismatch warning Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Implements RefactronPipeline in refactron/core/pipeline.py connecting the analyze → fix queue → verify → write pipeline with session persistence. Adds 6 passing tests in tests/test_pipeline_orchestrator.py. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Wire refactron analyze to create and persist a PipelineSession from the existing AnalysisResult (no re-analysis), print the session ID at the bottom of output, and optionally queue matching issues via --fix-on. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Wire the autofix command to the RefactronPipeline: supports --session <id> to resume a queued session, or falls back to fresh analysis when only a target path is given. Updates autofix tests to mock the pipeline. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Implements Task 5 of the connected pipeline: a `refactron status` CLI command that reads PipelineSession state from SessionStore and displays analysis summary, fix queue progress, and next-step hints. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds --pipeline-session option to refactron rollback command, enabling rollback of a specific pipeline session tracked by PipelineSession/SessionStore. Updates session state to ROLLED_BACK after successful restore. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Implements the `refactron run` CLI command that chains analyze → queue → apply in a single invocation, with --dry-run, --fix-on, --no-verify, and --fail-on gates. Registers the command in main.py and adds 3 passing tests. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds tests/test_pipeline_integration.py with 4 end-to-end tests covering analyze → queue → apply roundtrip, session persistence, blocked-fix recording, and min_level filtering. Applies black/isort formatting to all connected-pipeline source and test files to satisfy the linting gate. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
⚠️ Warning: Rate limit exceeded

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 9 minutes and 22 seconds.
📝 Walkthrough

Adds a session-driven pipeline: new RefactronPipeline, PipelineSession and SessionStore persistence, session-aware queue/apply semantics, and new CLI commands (`status`, `run`).
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant Pipeline
    participant SessionStore
    participant FixEngine
    participant FileSystem
    User->>CLI: refactron analyze --fix-on ERROR
    CLI->>Pipeline: analyze(target)
    Pipeline->>FileSystem: scan files / detect issues
    Pipeline->>SessionStore: save(session {ANALYZED})
    SessionStore-->>Pipeline: session_id
    Pipeline-->>CLI: session_id
    CLI->>Pipeline: queue_issues(session, issues[, min_level])
    Pipeline->>SessionStore: update(session.fix_queue)
    CLI->>User: print session_id and queued count
    User->>CLI: refactron run/autofix --session <id> --dry-run?
    CLI->>SessionStore: load(session_id) / load_current()
    SessionStore-->>CLI: session (with fix_queue)
    CLI->>Pipeline: apply(session, dry_run, verify)
    Pipeline->>FixEngine: fix_file(file, issues_for_file)
    FixEngine->>FileSystem: read/write/backup files
    Pipeline->>SessionStore: save(updated session {FIXED/APPLIED/BLOCKED})
    CLI->>User: print applied/blocked/skipped counts and session id
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed
❌ Failed checks: 1 warning, 1 inconclusive
✅ Passed checks: 1 passed
Pull request overview
Introduces a “connected pipeline” concept by adding a persisted PipelineSession model and a new RefactronPipeline orchestrator, then wiring CLI commands to create, inspect, and run sessions end-to-end.
Changes:
- Added `PipelineSession`/`SessionStore` JSON persistence under `<project_root>/.refactron/sessions/`.
- Implemented `RefactronPipeline` (analyze → queue → apply) and added new CLI commands `status` and `run`.
- Updated `analyze` and `autofix` CLIs, plus tests covering session persistence and the new commands.
Reviewed changes
Copilot reviewed 13 out of 13 changed files in this pull request and generated 10 comments.
Show a summary per file
| File | Description |
|---|---|
| tests/test_pipeline_session.py | New unit tests for session model serialization and SessionStore. |
| tests/test_pipeline_orchestrator.py | New tests for RefactronPipeline analyze/queue/apply orchestration behavior. |
| tests/test_pipeline_integration.py | New end-to-end style tests for analyze→queue→apply flows and edge cases. |
| tests/test_cli.py | Updates autofix CLI tests to mock the new pipeline usage. |
| tests/test_cli_status.py | New tests for refactron status command output and options. |
| tests/test_cli_run.py | New tests for refactron run command behavior and exit codes. |
| refactron/core/pipeline.py | New pipeline orchestrator for session-aware analysis and autofix application. |
| refactron/core/pipeline_session.py | New persisted session state model + JSON store utilities. |
| refactron/cli/status.py | New CLI command to display latest/specific session and fix counts. |
| refactron/cli/run.py | New one-shot CLI command to analyze, queue, apply, and optionally fail CI. |
| refactron/cli/refactor.py | Adds session-aware autofix behavior and pipeline-session rollback support. |
| refactron/cli/main.py | Registers the new status and run commands. |
| refactron/cli/analysis.py | Emits a persisted pipeline session after analysis (optionally auto-queueing fixes). |
Comments suppressed due to low confidence (1)
refactron/cli/refactor.py:360
- The new session-aware branch returns at line 355, leaving the entire original autofix implementation (setup/config/safety mapping, etc.) unreachable dead code. This is error-prone (future edits may land in the dead branch) and also makes options like `--safety-level` appear implemented but effectively ignored. Please remove the unreachable block or refactor so there is a single coherent codepath.
```python
if dry_run:
    console.print("\n[bold]Dry-run complete[/bold]")
    for _item in _pipeline_session.fix_queue:
        if _item.diff:
            console.print(
                f"\n  [cyan]{_item.file_path}:{_item.line_number}[/cyan] {_item.message}"
            )
            console.print(_item.diff)
else:
    console.print(f"\n[bold green]Applied:[/bold green] {_applied}")
    if _blocked:
        console.print(f"[bold red]Blocked:[/bold red] {_blocked}")
    if _skipped:
        console.print(f"[dim]Skipped: {_skipped}[/dim]")
console.print(f"\n[dim]Session: {_pipeline_session.session_id}[/dim]")
if _applied > 0:
    console.print(
        f"[dim]To undo: refactron rollback --session "
        f"{_pipeline_session.session_id}[/dim]"
    )
return

# Setup
target_path = _validate_path(target)
_load_config(config, profile, environment)
_print_file_count(target_path)
```
```python
# @@ -284,6 +294,66 @@ def autofix(
    _auth_banner("Auto-fix")
    console.print()

    # ── Session-aware pipeline ────────────────────────────────────────
    from refactron.core.pipeline import RefactronPipeline

    _target_path = Path(target) if target else Path.cwd()
    _project_root = _target_path if _target_path.is_dir() else _target_path.parent
    _pipeline = RefactronPipeline(project_root=_project_root)

    if session_id:
        _pipeline_session = _pipeline.store.load(session_id)
        if _pipeline_session is None:
            console.print(f"[red]Session not found: {session_id}[/red]")
            raise SystemExit(1)
        _queued = len([i for i in _pipeline_session.fix_queue if i.status.value == "pending"])
        console.print(
            f"[dim]Loaded session {session_id} · {_queued} items queued[/dim]"
        )
    else:
        if not target:
            console.print("[red]Provide --session <id> or a target path.[/red]")
            raise SystemExit(1)
        console.print("[dim]No session — running fresh analysis...[/dim]")
        _pipeline_session = _pipeline.analyze(_target_path)
        if _pipeline._last_result:
            _all_issues = [
                i for fm in _pipeline._last_result.file_metrics for i in fm.issues
            ]
            _pipeline.queue_issues(_pipeline_session, _all_issues)

    _pipeline.apply(
        _pipeline_session,
        dry_run=dry_run,
        verify=verify,
    )
```
autofix currently ignores the --preview/--apply flag: it always calls _pipeline.apply(..., dry_run=dry_run) regardless of preview. Since preview defaults to True and dry_run defaults to False, running refactron autofix <target> would write changes by default. Please gate writes on preview (e.g., treat preview as dry-run) so the default behavior remains non-destructive unless --apply is explicitly provided.
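A minimal sketch of the gating this comment asks for (the helper name is hypothetical; `preview` and `dry_run` stand for the CLI flag values):

```python
def effective_dry_run(preview: bool, dry_run: bool) -> bool:
    """Treat preview mode as a dry run so the default invocation never writes.

    Writes happen only when the user passed --apply (preview=False)
    and did not also pass --dry-run.
    """
    return dry_run or preview
```

The pipeline would then be called with `dry_run=effective_dry_run(preview, dry_run)` so the default `refactron autofix <target>` stays non-destructive.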
```python
_target_path = Path(target) if target else Path.cwd()
_project_root = _target_path if _target_path.is_dir() else _target_path.parent
_pipeline = RefactronPipeline(project_root=_project_root)
```
autofix accepts --safety-level, but the session-aware implementation constructs RefactronPipeline(project_root=_project_root) without passing the chosen risk level to the underlying AutoFixEngine. This makes --safety-level a no-op for the new flow. Please map the CLI string to FixRiskLevel and pass it into RefactronPipeline(..., safety_level=...) (or otherwise enforce the safety level).
```diff
-_pipeline = RefactronPipeline(project_root=_project_root)
+_safety_level_map = {
+    "conservative": FixRiskLevel.CONSERVATIVE,
+    "moderate": FixRiskLevel.MODERATE,
+    "aggressive": FixRiskLevel.AGGRESSIVE,
+}
+_selected_safety_level = _safety_level_map.get(
+    str(safety_level).lower(), FixRiskLevel.MODERATE
+)
+_pipeline = RefactronPipeline(
+    project_root=_project_root, safety_level=_selected_safety_level
+)
```
```python
console.print(f"\n[dim]Session: {_pipeline_session.session_id}[/dim]")
if _applied > 0:
    console.print(
        f"[dim]To undo: refactron rollback --session "
```
In the non-dry-run summary, the rollback hint uses refactron rollback --session <id>, but the pipeline-session rollback option added below is --pipeline-session. As written, the printed command will not roll back the pipeline session. Please update the hint to match the actual option name for pipeline sessions.
```diff
-        f"[dim]To undo: refactron rollback --session "
+        f"[dim]To undo: refactron rollback --pipeline-session "
```
refactron/cli/refactor.py (Outdated)
```python
if pipeline_session_id:
    from refactron.core.backup import BackupManager
    from refactron.core.pipeline_session import SessionState, SessionStore

    _store = SessionStore(root_dir=Path.cwd())
    _pipeline_session = _store.load(pipeline_session_id)
    if _pipeline_session is None:
        console.print(f"[red]Session not found: {pipeline_session_id}[/red]")
        raise SystemExit(1)
    if not _pipeline_session.applied_fixes:
        console.print("[yellow]No applied fixes in this session to roll back.[/yellow]")
        return
    if not _pipeline_session.backup_session_id:
        console.print("[red]Session has no backup ID — cannot roll back.[/red]")
        raise SystemExit(1)

    _bm = BackupManager(root_dir=Path.cwd())
    _restored_count, _failed = _bm.rollback_session(_pipeline_session.backup_session_id)
```
Pipeline-session rollback loads sessions from SessionStore(root_dir=Path.cwd()), which will fail if the user runs the command from a different directory than the project root where the session was saved. Consider accepting/using a --project-root option (like refactron status does) or searching upward for .refactron/sessions similarly to how the backup rollback code searches for .refactron/backups.
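A sketch of the upward search this comment suggests (the helper name is hypothetical):

```python
from pathlib import Path
from typing import Optional


def find_project_root(start: Path) -> Optional[Path]:
    """Walk upward from `start` until a directory containing
    .refactron/sessions is found, mirroring how the backup rollback
    code searches for .refactron/backups. Returns None if no root exists."""
    current = start.resolve()
    for candidate in [current, *current.parents]:
        if (candidate / ".refactron" / "sessions").is_dir():
            return candidate
    return None
```

`SessionStore` and `BackupManager` could then be constructed with `root_dir=find_project_root(Path.cwd()) or Path.cwd()` instead of hard-coding the caller's location.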
```python
def _find_fixer_name(self, issue: CodeIssue) -> Optional[str]:
    if not self.fix_engine.can_fix(issue):
        return None
    for name, fixer in self.fix_engine.fixers.items():
        try:
            result = fixer.preview(issue, "x = 1\n")
            if result.success:
                return name
        except Exception:
            continue
    return None
```
_find_fixer_name calls self.fix_engine.can_fix(issue), but AutoFixEngine.can_fix only returns True when issue.rule_id matches a fixer name. Analyzer rule IDs (e.g., S004, S006, etc.) generally won’t match fixer names (e.g., extract_magic_numbers, remove_unused_imports), so this will cause nearly everything to be marked SKIPPED and nothing queued. Please introduce an explicit mapping from analyzer rule_id → fixer key (and/or change the selection logic) so queueing works with analyzer-produced issues.
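One possible shape for the explicit mapping (the rule-to-fixer pairings below are illustrative, not Refactron's actual table):

```python
from typing import Optional, Set

# Hypothetical mapping from analyzer rule IDs to registered fixer keys;
# the real table would live alongside the fixer registry.
RULE_TO_FIXER = {
    "S004": "remove_unused_imports",
    "S006": "extract_magic_numbers",
}


def resolve_fixer(rule_id: str, registered_fixers: Set[str]) -> Optional[str]:
    """Map an analyzer rule ID to a fixer key, or None when no
    registered fixer can handle the rule."""
    fixer = RULE_TO_FIXER.get(rule_id)
    return fixer if fixer in registered_fixers else None
```

With such a table, queueing can select fixers deterministically instead of probing each fixer's `preview` against a synthetic snippet.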
```python
file_path=file_path,
line_number=item.line_number,
column=0,
suggestion=None,
```
Even when a FixQueueItem has a fixer_name, _queue_items_to_issues reconstructs CodeIssue objects without setting rule_id (and also drops metadata). Since AutoFixEngine.fix_file only applies fixes when issue.rule_id matches a registered fixer, fix_file will skip all issues and diff will always be empty/None. Please propagate the selected fixer key into CodeIssue.rule_id (and preserve any needed metadata) when converting queue items back into issues.
```diff
 suggestion=None,
+rule_id=item.fixer_name,
+metadata=getattr(item, "metadata", None),
```
```python
backup_session_id: Optional[str] = None
if not dry_run and files_to_fix:
    backup_session_id = self.backup_manager.create_backup_session(
        description=f"autofix session {session.session_id}"
    )
    session.backup_session_id = backup_session_id
```
apply() creates a backup session ID but never actually backs up any files before writing changes. BackupManager.create_backup_session() only creates the session directory/index entry; without calling backup_file(s) for the files being modified, rollback will have nothing to restore. Please back up all files in files_to_fix before invoking fix_engine.fix_file when dry_run is False.
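What the missing step might look like, as a self-contained sketch (plain `shutil` copies stand in for Refactron's real `BackupManager`, whose per-file API isn't shown in this diff):

```python
import shutil
from pathlib import Path
from typing import Dict, List


def backup_files(files: List[Path], backup_root: Path, session_id: str) -> Dict[str, Path]:
    """Copy each file into <backup_root>/<session_id>/ before it is
    modified, returning a map of original path -> backup path that a
    later rollback can restore from."""
    session_dir = backup_root / session_id
    session_dir.mkdir(parents=True, exist_ok=True)
    backups: Dict[str, Path] = {}
    for file_path in files:
        target = session_dir / file_path.name
        shutil.copy2(file_path, target)  # preserves mtime/permissions
        backups[str(file_path)] = target
    return backups
```

In `apply()`, the equivalent call would run over `files_to_fix` right after `create_backup_session()` and before any `fix_engine.fix_file()` write.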
```python
for file_path_str, items in files_to_fix.items():
    file_path = Path(file_path_str)
    if not file_path.exists():
        for item in items:
            item.status = FixStatus.BLOCKED
            item.block_reason = "file not found"
            session.blocked_fixes.append(item)
        continue
```
apply() derives file_path = Path(file_path_str) directly from the stored string. If sessions persist relative paths (common when target is passed as a relative path), resuming a session from a different working directory will break (files appear missing → fixes get BLOCKED). Consider storing paths relative to project_root and resolving against it at apply-time, or storing absolute resolved paths when queuing.
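A sketch of the relative-path convention the comment suggests (helper names hypothetical):

```python
from pathlib import Path


def to_stored_path(file_path: Path, project_root: Path) -> str:
    """Persist paths relative to the project root so a session can be
    resumed from any working directory."""
    return str(file_path.resolve().relative_to(project_root.resolve()))


def from_stored_path(stored: str, project_root: Path) -> Path:
    """Resolve a stored relative path against the project root at
    apply time."""
    return (project_root / stored).resolve()
```

Queueing would call `to_stored_path()` when building `FixQueueItem`s, and `apply()` would call `from_stored_path()` instead of `Path(file_path_str)`.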
```python
if not file_path.exists():
    for item in items:
        item.status = FixStatus.BLOCKED
        item.block_reason = "file not found"
        session.blocked_fixes.append(item)
    continue

issues_for_file = self._queue_items_to_issues(items, file_path)
try:
    fixed_code, diff = self.fix_engine.fix_file(
        file_path, issues_for_file, dry_run=dry_run, verify=verify
    )
except Exception as exc:
    logger.warning("fix_file failed for %s: %s", file_path, exc)
    for item in items:
        item.status = FixStatus.BLOCKED
        item.block_reason = str(exc)
        session.blocked_fixes.append(item)
    continue
```
apply() appends the same FixQueueItem objects to session.applied_fixes / session.blocked_fixes but also leaves them in session.fix_queue. This duplicates entries across lists in the persisted session JSON and makes it harder to reason about counts/state. Consider either (a) keeping all items only in fix_queue and deriving views from status, or (b) moving items out of fix_queue when they become applied/blocked (so each item lives in exactly one list).
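Option (a) can be sketched with derived views, using simplified stand-ins for the real dataclasses:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class FixStatus(Enum):  # simplified stand-in for refactron's enum
    PENDING = "pending"
    APPLIED = "applied"
    BLOCKED = "blocked"


@dataclass
class Item:
    name: str
    status: FixStatus = FixStatus.PENDING


@dataclass
class Session:
    fix_queue: List[Item] = field(default_factory=list)

    # Each item lives only in fix_queue; applied/blocked are views
    # derived from status, so nothing is duplicated in persisted JSON.
    @property
    def applied_fixes(self) -> List[Item]:
        return [i for i in self.fix_queue if i.status is FixStatus.APPLIED]

    @property
    def blocked_fixes(self) -> List[Item]:
        return [i for i in self.fix_queue if i.status is FixStatus.BLOCKED]
```

Serialization then only needs to persist `fix_queue`, and counts stay consistent by construction.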
```python
# --fail-on gate
if fail_on:
    threshold = _LEVEL_RANK[fail_on.upper()]
    should_fail = any(
        session.issues_by_level.get(lvl, 0) > 0
        for lvl, rank in _LEVEL_RANK.items()
        if rank >= threshold
    )
    if should_fail:
        raise SystemExit(1)
```
The --fail-on gate checks session.issues_by_level, but that data comes from the initial analysis and is never recomputed after fixes are applied. This means refactron run --fail-on ... can exit 1 even if fixes were applied successfully, or exit 0 even if critical issues remain. Please rerun analysis after apply (or otherwise update issues_by_level based on a post-fix analysis) before deciding whether to exit 1.
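A sketch of the gate evaluated against refreshed counts (the `_LEVEL_RANK` values here are illustrative, not necessarily Refactron's):

```python
from typing import Dict

_LEVEL_RANK = {"INFO": 0, "WARNING": 1, "ERROR": 2, "CRITICAL": 3}


def should_fail(issues_by_level: Dict[str, int], fail_on: str) -> bool:
    """Evaluate the --fail-on gate against *post-fix* issue counts.

    `issues_by_level` should come from an analysis re-run after apply(),
    not from the pre-fix session snapshot."""
    threshold = _LEVEL_RANK[fail_on.upper()]
    return any(
        issues_by_level.get(lvl, 0) > 0
        for lvl, rank in _LEVEL_RANK.items()
        if rank >= threshold
    )
```

The `run` command would call this after re-running analysis, exiting 1 only when issues at or above the threshold actually remain.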
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
refactron/cli/refactor.py (1)
241-262: ⚠️ Potential issue | 🔴 Critical

The new autofix path bypasses the command's safety contract.

Because this branch returns at Line 355, it never reaches the existing option handling below. Right now `--preview` does not prevent writes (`pipeline.apply()` only sees `dry_run`), so the default invocation can modify files, and `--safety-level`/`--config`/`--profile`/`--environment` are effectively no-ops on this path. Based on learnings, "All refactoring must go through safety-first pipeline: preview → backup → apply → optional rollback".

🛠️ Suggested fix
```diff
+    safety_map = {
+        "safe": FixRiskLevel.SAFE,
+        "low": FixRiskLevel.LOW,
+        "moderate": FixRiskLevel.MODERATE,
+        "high": FixRiskLevel.HIGH,
+    }
+    effective_dry_run = dry_run or preview
+
     _target_path = Path(target) if target else Path.cwd()
     _project_root = _target_path if _target_path.is_dir() else _target_path.parent
-    _pipeline = RefactronPipeline(project_root=_project_root)
+    _pipeline = RefactronPipeline(
+        project_root=_project_root,
+        safety_level=safety_map[safety_level.lower()],
+    )
@@
     _pipeline.apply(
         _pipeline_session,
-        dry_run=dry_run,
+        dry_run=effective_dry_run,
         verify=verify,
     )
```

Also applies to: 297-355, 357-380
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@refactron/cli/refactor.py` around lines 241 - 262, The early-return branch bypasses the safety-first flow so options like --preview/--apply, --safety-level, --config, --profile, --environment and --dry-run are ignored and pipeline.apply() can write files; change the branch that currently returns to instead funnel through the existing safety pipeline (preview → backup → pipeline.apply() → optional rollback) so that --preview causes only a preview (no writes) and pipeline.apply() respects dry_run and safety-level/config/profile/environment and that verify is run prior to apply; locate and update the code around the branch that returns (the block referencing pipeline.apply(), --preview/--apply, --dry-run, --safety-level, and --verify) to remove the premature return and invoke the common safety handlers (preview handling, backup creation, apply call, and rollback path) so all refactors use the same safety-first sequence.
🧹 Nitpick comments (1)
tests/test_cli_run.py (1)
31-38: These mocks never exercise the new pipeline path.

All three tests set `mock_pipeline._last_result = None`, which makes `run()` skip `queue_issues()` and never call `pipeline.apply()`. Populate `file_metrics` and assert the queue/apply interactions so regressions in the actual one-shot flow are caught.

Also applies to: 47-54, 62-68
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_cli_run.py` around lines 31 - 38, The tests currently set mock_pipeline._last_result = None so run() always skips queue_issues() and pipeline.apply(); change the mocks for RefactronPipeline in tests/test_cli_run.py to populate mock_pipeline._last_result with a dict containing file_metrics (e.g. {'file_metrics': [...]}), call runner.invoke(run, ...) as before, and add assertions that mock_pipeline.queue_issues was called and mock_pipeline.apply was invoked (or not depending on flags) to ensure the one-shot path is exercised; update the other two test blocks (lines corresponding to the other patches) similarly so all three cases assert queue_issues()/apply() interactions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@refactron/cli/refactor.py`:
- Around line 350-353: The rollback hint prints the wrong flag; update the
string passed to console.print in the blocks that check _applied > 0 (and the
similar block around lines 432-437) to use the new flag name --pipeline-session
instead of --session so the hint points users to the pipeline rollback path;
locate the console.print calls that reference _pipeline_session.session_id and
replace the embedded "--session" token with "--pipeline-session" so the printed
undo command matches the implemented CLI option.
- Around line 300-307: The session/backup lookup is using the caller's CWD
instead of the pipeline's project root; update the code that constructs
SessionStore and BackupManager and the session-loading logic to derive
project_root the same way RefactronPipeline does (use the
_target_path/_project_root logic used when instantiating RefactronPipeline or
use PipelineSession.target when resuming), so SessionStore.load(session_id) and
BackupManager operate against the pipeline's project_root rather than
Path.cwd(); apply the same change to the other affected block referenced (around
the code handling rollback / lines 461-478) so all session and backup resolution
consistently uses RefactronPipeline.project_root or PipelineSession.target.
In `@refactron/cli/run.py`:
- Around line 35-40: The CLI currently performs destructive changes by default;
change the click option for "--dry-run" to default=True (so preview is the
default) and remove the unconditional call to pipeline.apply(... dry_run=False)
in the run command; instead call a non-destructive pipeline.preview(...) after
analysis, and only invoke pipeline.backup(...) then pipeline.apply(...) when an
explicit confirmation/flag (e.g. an "--apply" or interactive yes prompt) is
provided; ensure the flow follows preview → backup → apply → optional rollback
and reference the existing pipeline.apply and pipeline.preview calls in run.py
so the code only writes files when the explicit apply path is taken.
- Around line 120-129: The fail-on check uses stale counts in
session.issues_by_level from the pre-apply analysis; after calling
pipeline.apply() you must recompute severity totals before evaluating --fail-on.
Update the code so that once pipeline.apply() completes you invoke the session
analysis routine that refreshes issue counts (e.g. call the method that re-runs
analysis or recomputes totals on session such that session.issues_by_level is
updated), then compute threshold via _LEVEL_RANK[fail_on.upper()] and evaluate
should_fail against the refreshed session.issues_by_level and raise
SystemExit(1) if appropriate.
In `@refactron/cli/status.py`:
- Around line 27-30: Wrap calls to SessionStore.list_sessions() and
SessionStore.load() in try/except at the CLI boundary (in the status command) so
any IO/Unicode/parse errors from unreadable or non-UTF8 files under
.refactron/sessions/ are caught; catch broad exceptions like
OSError/UnicodeDecodeError/ValueError (or Exception as a last resort), and print
a concise, friendly error message via the CLI logger/console instead of letting
the traceback propagate, then return a non-zero exit code. Locate the calls to
SessionStore.list_sessions() and SessionStore.load() in the status command and
add the try/except blocks around them, ensuring the error message references the
sessions directory/context so users know which file caused the problem.
In `@refactron/core/pipeline_session.py`:
- Around line 149-152: The save() implementation writes session JSON in-place
which can produce a truncated/corrupt newest file; change save(session:
PipelineSession) to write to a temporary file in the same directory (e.g.,
_session_path(session.session_id).with_suffix(".tmp")), flush and fsync the temp
file, then atomically rename/move it to the final path; reference
session.to_dict(), sessions_dir and _session_path. Update load_latest() to not
assume the single lexicographically newest file is valid: list files in
sessions_dir sorted descending, attempt to open and json.loads each in order,
returning the first successfully parsed PipelineSession (skip malformed files
and continue), and only return None if none parse successfully.
- Around line 146-147: Treat the session_id as untrusted data: validate it in
_session_path and any callers (reject if it contains path separators '/' or '\\'
or the segment '..', or use a strict allowlist/sanitization), and when loading a
session (the load() code that reads the JSON) verify the parsed object's
session_id exactly equals the requested session_id and raise an error if not;
ensure all places that build paths from session_id use _session_path so the
validation is centralized and the code fails closed on mismatches or invalid
ids.
- Around line 142-145: The constructor SessionStore.__init__ is missing a return
type annotation which violates disallow_untyped_defs; update the method
signature in class SessionStore (def __init__(self, root_dir: Optional[Path] =
None)) to include an explicit -> None return annotation, and ensure Optional and
Path are imported/available in that module so the type hint resolves.
In `@refactron/core/pipeline.py`:
- Around line 136-174: Before applying fixes when not dry_run, create the backup
session (backup_manager.create_backup_session) and then back up each target file
into that session prior to calling fix_engine.fix_file; after backing files,
call BackupManager.validate_backup_integrity(session.backup_session_id) and if
it returns any corrupt_paths mark all related FixItems as BLOCKED (set
block_reason) and append them to session.blocked_fixes, then skip applying fixes
for those files. Ensure when setting item.backup_path you point to
backup_manager.backup_dir / session.backup_session_id / file_path.name and only
proceed to call fix_engine.fix_file(file_path, ...) if the backup validation
succeeded.
- Around line 96-118: queue_issues() currently probes fixers using a synthetic
snippet and stores minimal info in FixQueueItem, and apply() rebuilds a generic
CodeIssue causing loss of original identity; change FixQueueItem to persist the
original issue payload (rule_id, category, metadata, full message, etc.) and
have _find_fixer_name call each fixer's preview against the real file contents
(load the file at issue.file_path) instead of "x = 1\n"; ensure apply()
reconstructs the CodeIssue from the stored original payload (not as
IssueCategory.COMPLEXITY) before calling AutoFixEngine.fix_file(), so
AutoFixEngine.fix_file() receives the original issue with correct
rule_id/metadata and previews are based on actual file text.
---
Outside diff comments:
In `@refactron/cli/refactor.py`:
- Around line 241-262: The early-return branch bypasses the safety-first flow so
options like --preview/--apply, --safety-level, --config, --profile,
--environment and --dry-run are ignored and pipeline.apply() can write files;
change the branch that currently returns to instead funnel through the existing
safety pipeline (preview → backup → pipeline.apply() → optional rollback) so
that --preview causes only a preview (no writes) and pipeline.apply() respects
dry_run and safety-level/config/profile/environment and that verify is run prior
to apply; locate and update the code around the branch that returns (the block
referencing pipeline.apply(), --preview/--apply, --dry-run, --safety-level, and
--verify) to remove the premature return and invoke the common safety handlers
(preview handling, backup creation, apply call, and rollback path) so all
refactors use the same safety-first sequence.
---
Nitpick comments:
In `@tests/test_cli_run.py`:
- Around line 31-38: The tests currently set mock_pipeline._last_result = None
so run() always skips queue_issues() and pipeline.apply(); change the mocks for
RefactronPipeline in tests/test_cli_run.py to populate
mock_pipeline._last_result with a dict containing file_metrics (e.g.
{'file_metrics': [...]}), call runner.invoke(run, ...) as before, and add
assertions that mock_pipeline.queue_issues was called and mock_pipeline.apply
was invoked (or not depending on flags) to ensure the one-shot path is
exercised; update the other two test blocks (lines corresponding to the other
patches) similarly so all three cases assert queue_issues()/apply()
interactions.
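The atomic-write pattern described in the `pipeline_session.py` item above can be sketched as follows (helper name hypothetical):

```python
import json
import os
import tempfile
from pathlib import Path


def atomic_save_json(path: Path, payload: dict) -> None:
    """Write JSON to a temp file in the same directory, fsync it, then
    rename it over the final path so readers never see a partial file."""
    path.parent.mkdir(parents=True, exist_ok=True)
    fd, tmp_name = tempfile.mkstemp(dir=path.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as fh:
            json.dump(payload, fh)
            fh.flush()
            os.fsync(fh.fileno())
        os.replace(tmp_name, path)  # atomic rename on POSIX and Windows
    except BaseException:
        if os.path.exists(tmp_name):
            os.unlink(tmp_name)  # don't leave stray temp files behind
        raise
```

`SessionStore.save()` would route through this helper, and `load_latest()` would then only need to skip the rare malformed file from an interrupted older write.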
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 27088f85-ef65-4eb6-bb18-9e7a80229609
📒 Files selected for processing (13)
- refactron/cli/analysis.py
- refactron/cli/main.py
- refactron/cli/refactor.py
- refactron/cli/run.py
- refactron/cli/status.py
- refactron/core/pipeline.py
- refactron/core/pipeline_session.py
- tests/test_cli.py
- tests/test_cli_run.py
- tests/test_cli_status.py
- tests/test_pipeline_integration.py
- tests/test_pipeline_orchestrator.py
- tests/test_pipeline_session.py
```python
_target_path = Path(target) if target else Path.cwd()
_project_root = _target_path if _target_path.is_dir() else _target_path.parent
_pipeline = RefactronPipeline(project_root=_project_root)

if session_id:
    _pipeline_session = _pipeline.store.load(session_id)
    if _pipeline_session is None:
        console.print(f"[red]Session not found: {session_id}[/red]")
```
Resolve sessions/backups from the pipeline root, not the caller's location.
RefactronPipeline persists sessions under its project_root, but autofix --session derives that root from the current target/cwd and rollback --pipeline-session hard-codes Path.cwd() for both SessionStore and BackupManager. Resuming or rolling back the same session from a subdirectory or another shell location will miss the saved session/backups or target the wrong project. Reuse the same root-discovery logic in both flows, or derive it from PipelineSession.target.
Also applies to: 461-478
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@refactron/cli/refactor.py` around lines 300 - 307, The session/backup lookup
is using the caller's CWD instead of the pipeline's project root; update the
code that constructs SessionStore and BackupManager and the session-loading
logic to derive project_root the same way RefactronPipeline does (use the
_target_path/_project_root logic used when instantiating RefactronPipeline or
use PipelineSession.target when resuming), so SessionStore.load(session_id) and
BackupManager operate against the pipeline's project_root rather than
Path.cwd(); apply the same change to the other affected block referenced (around
the code handling rollback / lines 461-478) so all session and backup resolution
consistently uses RefactronPipeline.project_root or PipelineSession.target.
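The shared root-discovery the prompt calls for could look like the minimal sketch below. `resolve_project_root` and its signature are assumptions for illustration, not Refactron's actual API; both `SessionStore` and `BackupManager` would then be constructed from its single result so every flow resolves the same root:

```python
from pathlib import Path
from typing import Optional


def resolve_project_root(target: Optional[str], session_target: Optional[str] = None) -> Path:
    """Derive the pipeline project root the same way in every flow.

    Prefers the persisted session target (when resuming a session), then the
    CLI target, then the current directory. Hypothetical helper -- the real
    CLI would share this between `autofix --session` and `rollback`.
    """
    raw = session_target or target
    path = Path(raw).resolve() if raw else Path.cwd()
    # A file target resolves to its containing directory.
    return path if path.is_dir() else path.parent
```

Because both commands go through one helper, a session saved from the repo root stays findable when the user later resumes from a subdirectory.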
```python
@click.option(
    "--dry-run",
    is_flag=True,
    default=False,
    help="Show what would be fixed without writing any files",
)
```
Don't make run write by default.
The non-dry path calls pipeline.apply(... dry_run=False) immediately after analysis, so this command skips the mandatory preview stage and becomes destructive by default. Please make preview the default outcome and require an explicit apply step or confirmation before any write.
Based on learnings: All refactoring must go through safety-first pipeline: preview → backup → apply → optional rollback.
Also applies to: 99-103
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@refactron/cli/run.py` around lines 35 - 40, The CLI currently performs
destructive changes by default; change the click option for "--dry-run" to
default=True (so preview is the default) and remove the unconditional call to
pipeline.apply(... dry_run=False) in the run command; instead call a
non-destructive pipeline.preview(...) after analysis, and only invoke
pipeline.backup(...) then pipeline.apply(...) when an explicit confirmation/flag
(e.g. an "--apply" or interactive yes prompt) is provided; ensure the flow
follows preview → backup → apply → optional rollback and reference the existing
pipeline.apply and pipeline.preview calls in run.py so the code only writes
files when the explicit apply path is taken.
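Independent of click, the preview-by-default ordering the prompt describes can be sketched as a pure function; the stage names here are illustrative assumptions, not Refactron's API:

```python
from typing import List


def plan_stages(apply: bool, dry_run: bool = True) -> List[str]:
    """Return the pipeline stages a run should execute.

    Preview is always first and is the default outcome; backup and apply
    only happen when the caller explicitly opts in and dry-run is off.
    Illustrative sketch of the safety-first flow, not the real command.
    """
    stages = ["analyze", "preview"]
    if apply and not dry_run:
        stages += ["backup", "apply"]
    return stages
```

Wiring the click command through a planner like this makes the "never write by default" invariant testable without invoking the CLI.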
```python
# --fail-on gate
if fail_on:
    threshold = _LEVEL_RANK[fail_on.upper()]
    should_fail = any(
        session.issues_by_level.get(lvl, 0) > 0
        for lvl, rank in _LEVEL_RANK.items()
        if rank >= threshold
    )
    if should_fail:
        raise SystemExit(1)
```
--fail-on is checking the pre-fix snapshot.
session.issues_by_level is populated during the initial analyze and pipeline.apply() never refreshes it, so a successful write run can still exit 1 on the old counts. Recompute the severity totals from a post-apply analysis before evaluating this gate.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@refactron/cli/run.py` around lines 120 - 129, The fail-on check uses stale
counts in session.issues_by_level from the pre-apply analysis; after calling
pipeline.apply() you must recompute severity totals before evaluating --fail-on.
Update the code so that once pipeline.apply() completes you invoke the session
analysis routine that refreshes issue counts (e.g. call the method that re-runs
analysis or recomputes totals on session such that session.issues_by_level is
updated), then compute threshold via _LEVEL_RANK[fail_on.upper()] and evaluate
should_fail against the refreshed session.issues_by_level and raise
SystemExit(1) if appropriate.
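The gate itself can be factored into a pure function that is evaluated on counts recomputed after `pipeline.apply()`; the `_LEVEL_RANK` values below are assumed for illustration:

```python
from typing import Dict

# Assumed severity ordering; the real _LEVEL_RANK lives in refactron/cli/run.py.
_LEVEL_RANK: Dict[str, int] = {"INFO": 0, "WARNING": 1, "ERROR": 2, "CRITICAL": 3}


def fail_gate(issues_by_level: Dict[str, int], fail_on: str) -> bool:
    """True when any issue at or above the threshold severity remains.

    Call this on counts recomputed *after* pipeline.apply(), never on the
    pre-fix snapshot captured at analyze time.
    """
    threshold = _LEVEL_RANK[fail_on.upper()]
    return any(
        issues_by_level.get(lvl, 0) > 0
        for lvl, rank in _LEVEL_RANK.items()
        if rank >= threshold
    )
```

Keeping the gate side-effect free makes the "stale snapshot" bug a one-line fix: re-run analysis, then pass the fresh counts in.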
```python
store = SessionStore(root_dir=Path(project_root))

if list_sessions:
    sessions = store.list_sessions()
```
Catch session-store read failures at the CLI boundary.
SessionStore.list_sessions() / load() only handle a narrow subset of parse errors. An unreadable or non-UTF8 file under .refactron/sessions/ can still raise here and crash refactron status, so wrap these calls and show a friendly error instead of a traceback.
Also applies to: 53-58
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@refactron/cli/status.py` around lines 27 - 30, Wrap calls to
SessionStore.list_sessions() and SessionStore.load() in try/except at the CLI
boundary (in the status command) so any IO/Unicode/parse errors from unreadable
or non-UTF8 files under .refactron/sessions/ are caught; catch broad exceptions
like OSError/UnicodeDecodeError/ValueError (or Exception as a last resort), and
print a concise, friendly error message via the CLI logger/console instead of
letting the traceback propagate, then return a non-zero exit code. Locate the
calls to SessionStore.list_sessions() and SessionStore.load() in the status
command and add the try/except blocks around them, ensuring the error message
references the sessions directory/context so users know which file caused the
problem.
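The boundary wrapper the prompt describes can be sketched as a stand-in for `SessionStore.list_sessions()` that collects friendly errors instead of raising (hypothetical helper, not the real store):

```python
import json
from pathlib import Path
from typing import Dict, List, Tuple


def list_sessions_safe(sessions_dir: Path) -> Tuple[List[Dict], List[str]]:
    """Read every session JSON, reporting unreadable files instead of crashing.

    Catches the IO/decode/parse errors a CLI command should surface as a
    concise message rather than a traceback.
    """
    sessions: List[Dict] = []
    errors: List[str] = []
    for path in sorted(sessions_dir.glob("*.json")):
        try:
            sessions.append(json.loads(path.read_text(encoding="utf-8")))
        except (OSError, UnicodeDecodeError, ValueError) as exc:
            # Name the offending file so the user knows what to clean up.
            errors.append(f"{path.name}: {exc.__class__.__name__}")
    return sessions, errors
```

The CLI would print each entry in `errors` via `console.print` and exit non-zero, leaving the valid sessions displayable.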
```python
def __init__(self, root_dir: Optional[Path] = None):
    self.root_dir = Path(root_dir) if root_dir else Path.cwd()
    self.sessions_dir = self.root_dir / self.SESSIONS_DIR
```
Add the missing return annotation on SessionStore.__init__.
This constructor is still untyped, which is enough to trip disallow_untyped_defs under refactron/.
🛠️ Suggested fix

```diff
-    def __init__(self, root_dir: Optional[Path] = None):
+    def __init__(self, root_dir: Optional[Path] = None) -> None:
         self.root_dir = Path(root_dir) if root_dir else Path.cwd()
         self.sessions_dir = self.root_dir / self.SESSIONS_DIR
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def __init__(self, root_dir: Optional[Path] = None) -> None:
    self.root_dir = Path(root_dir) if root_dir else Path.cwd()
    self.sessions_dir = self.root_dir / self.SESSIONS_DIR
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@refactron/core/pipeline_session.py` around lines 142 - 145, The constructor
SessionStore.__init__ is missing a return type annotation which violates
disallow_untyped_defs; update the method signature in class SessionStore (def
__init__(self, root_dir: Optional[Path] = None)) to include an explicit -> None
return annotation, and ensure Optional and Path are imported/available in that
module so the type hint resolves.
```python
def _session_path(self, session_id: str) -> Path:
    return self.sessions_dir / f"{session_id}.json"
```
Treat session_id as data, not a path fragment.
_session_path() interpolates raw session ids into a filesystem path, and load() still returns the parsed object even when its embedded session_id does not match the requested one. A CLI value like ../other/session can escape .refactron/sessions, and a tampered JSON file can still be treated as the requested session. Reject ids that contain path separators / .. and fail closed on mismatches.
🛠️ Suggested fix

```diff
+import re
 @@
+_SESSION_ID_RE = re.compile(r"^sess_\d{8}_\d{6}_\d{6}$")
+
 @@
     def _session_path(self, session_id: str) -> Path:
+        if not _SESSION_ID_RE.fullmatch(session_id):
+            raise ValueError(f"Invalid session id: {session_id}")
         return self.sessions_dir / f"{session_id}.json"
 @@
         if session.session_id != session_id:
             _logger.warning(
                 "Session ID mismatch: requested %s, got %s", session_id, session.session_id
             )
-        return session
+            return None
+        return session
```

Also applies to: 162-166
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@refactron/core/pipeline_session.py` around lines 146 - 147, Treat the
session_id as untrusted data: validate it in _session_path and any callers
(reject if it contains path separators '/' or '\\' or the segment '..', or use a
strict allowlist/sanitization), and when loading a session (the load() code that
reads the JSON) verify the parsed object's session_id exactly equals the
requested session_id and raise an error if not; ensure all places that build
paths from session_id use _session_path so the validation is centralized and the
code fails closed on mismatches or invalid ids.
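A runnable version of that validation might look like the following; the `sess_...` id shape is taken from the reviewer's suggested pattern, and `session_path` is a free-function stand-in for `SessionStore._session_path`:

```python
import re
from pathlib import Path

# Assumed id shape, taken from the reviewer's suggested pattern.
_SESSION_ID_RE = re.compile(r"^sess_\d{8}_\d{6}_\d{6}$")


def session_path(sessions_dir: Path, session_id: str) -> Path:
    """Build the on-disk path for a session id, failing closed on bad ids.

    Rejects anything that does not match the strict id allowlist, which
    also rules out path separators and '..' traversal segments.
    """
    if not _SESSION_ID_RE.fullmatch(session_id):
        raise ValueError(f"Invalid session id: {session_id}")
    return sessions_dir / f"{session_id}.json"
```

Centralizing path construction here means every caller inherits the check, and a CLI value like `../other/session` can never escape `.refactron/sessions`.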
```python
def save(self, session: PipelineSession) -> None:
    self.sessions_dir.mkdir(parents=True, exist_ok=True)
    path = self._session_path(session.session_id)
    path.write_text(json.dumps(session.to_dict(), indent=2), encoding="utf-8")
```
Persist session files atomically and fall back to the newest valid one.
save() writes JSON in place, and load_latest() only attempts to parse the single lexicographically newest file. If the process dies mid-write, that newest session becomes malformed and every load_latest() caller starts returning None even though older valid sessions still exist.
🛠️ Suggested fix

```diff
     def save(self, session: PipelineSession) -> None:
         self.sessions_dir.mkdir(parents=True, exist_ok=True)
         path = self._session_path(session.session_id)
-        path.write_text(json.dumps(session.to_dict(), indent=2), encoding="utf-8")
+        tmp_path = path.with_suffix(".json.tmp")
+        tmp_path.write_text(json.dumps(session.to_dict(), indent=2), encoding="utf-8")
+        tmp_path.replace(path)
 @@
     def load_latest(self) -> Optional[PipelineSession]:
         if not self.sessions_dir.exists():
             return None
         paths = sorted(self.sessions_dir.glob("*.json"))
         if not paths:
             return None
-        try:
-            return PipelineSession.from_dict(json.loads(paths[-1].read_text(encoding="utf-8")))
-        except (json.JSONDecodeError, KeyError, ValueError):
-            return None
+        for path in reversed(paths):
+            try:
+                return PipelineSession.from_dict(json.loads(path.read_text(encoding="utf-8")))
+            except (json.JSONDecodeError, KeyError, ValueError, TypeError):
+                continue
+        return None
```

Also applies to: 168-177
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@refactron/core/pipeline_session.py` around lines 149 - 152, The save()
implementation writes session JSON in-place which can produce a
truncated/corrupt newest file; change save(session: PipelineSession) to write to
a temporary file in the same directory (e.g.,
_session_path(session.session_id).with_suffix(".tmp")), flush and fsync the temp
file, then atomically rename/move it to the final path; reference
session.to_dict(), sessions_dir and _session_path. Update load_latest() to not
assume the single lexicographically newest file is valid: list files in
sessions_dir sorted descending, attempt to open and json.loads each in order,
returning the first successfully parsed PipelineSession (skip malformed files
and continue), and only return None if none parse successfully.
```python
fixer_name = self._find_fixer_name(issue)
item = FixQueueItem(
    issue_id=f"issue_{idx:04d}",
    file_path=str(issue.file_path),
    line_number=issue.line_number,
    level=_LEVEL_MAP.get(issue.level, "INFO"),
    message=issue.message,
    fixer_name=fixer_name or "none",
    status=FixStatus.PENDING if fixer_name else FixStatus.SKIPPED,
)
session.fix_queue.append(item)
self.store.save(session)

def _find_fixer_name(self, issue: CodeIssue) -> Optional[str]:
    if not self.fix_engine.can_fix(issue):
        return None
    for name, fixer in self.fix_engine.fixers.items():
        try:
            result = fixer.preview(issue, "x = 1\n")
            if result.success:
                return name
        except Exception:
            continue
```
Queue/apply loses the original issue identity.
queue_issues() decides PENDING by probing fixers against "x = 1\n", then apply() rebuilds each queued item as a fresh CodeIssue with IssueCategory.COMPLEXITY and no rule_id/metadata. AutoFixEngine.fix_file() re-checks those reconstructed issues, so non-complexity findings can be queued as fixable and later skipped or misapplied. Persist the original issue payload and preview against the file's real contents instead of round-tripping through a synthetic issue.
Also applies to: 185-197
🧰 Tools
🪛 Ruff (0.15.9)
[error] 117-118: try-except-continue detected, consider logging the exception
(S112)
[warning] 117-117: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@refactron/core/pipeline.py` around lines 96 - 118, queue_issues() currently
probes fixers using a synthetic snippet and stores minimal info in FixQueueItem,
and apply() rebuilds a generic CodeIssue causing loss of original identity;
change FixQueueItem to persist the original issue payload (rule_id, category,
metadata, full message, etc.) and have _find_fixer_name call each fixer's
preview against the real file contents (load the file at issue.file_path)
instead of "x = 1\n"; ensure apply() reconstructs the CodeIssue from the stored
original payload (not as IssueCategory.COMPLEXITY) before calling
AutoFixEngine.fix_file(), so AutoFixEngine.fix_file() receives the original
issue with correct rule_id/metadata and previews are based on actual file text.
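One way to keep the original identity is to carry the analyzer's payload verbatim on each queue item; the schema below is illustrative, not the real `FixQueueItem`:

```python
from dataclasses import asdict, dataclass, field
from typing import Any, Dict


@dataclass
class QueuedFix:
    """Queue entry that carries the original issue verbatim.

    `issue_payload` holds everything the analyzer reported (rule_id,
    category, metadata, ...) so apply() can rebuild the exact issue instead
    of a synthetic stand-in. Illustrative schema, not Refactron's.
    """

    issue_id: str
    issue_payload: Dict[str, Any] = field(default_factory=dict)

    def to_dict(self) -> Dict[str, Any]:
        return asdict(self)

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "QueuedFix":
        return cls(issue_id=data["issue_id"], issue_payload=data["issue_payload"])
```

Because the payload round-trips through the session JSON unchanged, `apply()` never has to guess a category, and `AutoFixEngine.fix_file()` re-checks the same issue the analyzer produced.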
```python
backup_session_id: Optional[str] = None
if not dry_run and files_to_fix:
    backup_session_id = self.backup_manager.create_backup_session(
        description=f"autofix session {session.session_id}"
    )
    session.backup_session_id = backup_session_id

for file_path_str, items in files_to_fix.items():
    file_path = Path(file_path_str)
    if not file_path.exists():
        for item in items:
            item.status = FixStatus.BLOCKED
            item.block_reason = "file not found"
            session.blocked_fixes.append(item)
        continue

    issues_for_file = self._queue_items_to_issues(items, file_path)
    try:
        fixed_code, diff = self.fix_engine.fix_file(
            file_path, issues_for_file, dry_run=dry_run, verify=verify
        )
    except Exception as exc:
        logger.warning("fix_file failed for %s: %s", file_path, exc)
        for item in items:
            item.status = FixStatus.BLOCKED
            item.block_reason = str(exc)
            session.blocked_fixes.append(item)
        continue

    if diff:
        for item in items:
            item.diff = diff
            if not dry_run:
                item.status = FixStatus.APPLIED
                if backup_session_id:
                    item.backup_path = str(
                        self.backup_manager.backup_dir / backup_session_id / file_path.name
                    )
                session.applied_fixes.append(item)
```
No file backup is taken before non-dry-run writes.
create_backup_session() only gives you a session id here; nothing in this method copies the files that fix_file(..., dry_run=False) is about to overwrite. The new rollback --pipeline-session flow depends on session.backup_session_id, so this can produce sessions that look rollback-capable but have nothing restorable. Back up each target file before writing, then validate the backup session before continuing.
Based on learnings, "All refactoring must go through safety-first pipeline: preview → backup → apply → optional rollback". As per coding guidelines, "refactron/core/*.py: Use BackupManager.validate_backup_integrity(session_id) for backup integrity validation, returning (valid_paths, corrupt_paths)".
🧰 Tools
🪛 Ruff (0.15.9)
[warning] 154-154: Unpacked variable fixed_code is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
[warning] 157-157: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@refactron/core/pipeline.py` around lines 136 - 174, Before applying fixes
when not dry_run, create the backup session
(backup_manager.create_backup_session) and then back up each target file into
that session prior to calling fix_engine.fix_file; after backing files, call
BackupManager.validate_backup_integrity(session.backup_session_id) and if it
returns any corrupt_paths mark all related FixItems as BLOCKED (set
block_reason) and append them to session.blocked_fixes, then skip applying fixes
for those files. Ensure when setting item.backup_path you point to
backup_manager.backup_dir / session.backup_session_id / file_path.name and only
proceed to call fix_engine.fix_file(file_path, ...) if the backup validation
succeeded.
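The backup-then-verify step might be sketched like this; `backup_and_verify` is a hypothetical stand-in whose return shape mirrors `BackupManager.validate_backup_integrity()`:

```python
import hashlib
import shutil
from pathlib import Path
from typing import List, Tuple


def backup_and_verify(files: List[Path], backup_dir: Path) -> Tuple[List[Path], List[Path]]:
    """Copy each target file into backup_dir, then verify each copy by checksum.

    Returns (valid, corrupt); only files in `valid` should then be handed to
    fix_file(). Stand-alone sketch, not the real BackupManager.
    """
    backup_dir.mkdir(parents=True, exist_ok=True)
    valid: List[Path] = []
    corrupt: List[Path] = []
    for src in files:
        dst = backup_dir / src.name
        shutil.copy2(src, dst)  # copy before any write touches src
        same = (
            hashlib.sha256(src.read_bytes()).hexdigest()
            == hashlib.sha256(dst.read_bytes()).hexdigest()
        )
        (valid if same else corrupt).append(src)
    return valid, corrupt
```

Files landing in `corrupt` would be marked `BLOCKED` and appended to `session.blocked_fixes`, so a session never claims to be rollback-capable without restorable bytes on disk.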
… between commands
`IncrementalAnalysisTracker` was filtering out unchanged files on every run after the first, causing `refactron analyze` to return zero issues the second time. Incremental filtering is a programmatic-API optimization and has no place in the CLI where users expect full results every run. Both `analysis.py` and `RefactronPipeline.analyze()` now unconditionally set `cfg.enable_incremental_analysis = False` before running. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
refactron/cli/analysis.py (1)
220-236: ⚠️ Potential issue | 🟠 Major

`--format json` skips the new session workflow entirely. The JSON branch exits before this session block runs, so `--fix-on` never queues anything and no current session is written in CI/scripted mode. Move the session persistence ahead of the JSON `SystemExit` so the new pipeline behavior is format-independent.

Also applies to: 304-323
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@refactron/cli/analysis.py` around lines 220 - 236, The JSON output branch currently prints payload and calls raise SystemExit before the session persistence logic runs, which prevents --fix-on from queuing fixes and stops the current session from being written; move the session save/queueing code (the session persistence logic used elsewhere in this file) to execute before the JSON branch's SystemExit so that regardless of output_format (including the other JSON block at lines ~304-323) the code that enqueues fixes and writes the current session runs prior to exiting; ensure you reference and invoke the same session persistence routine used by the normal output path and then keep the click.echo(_json.dumps(payload, indent=2)) and raise SystemExit(1 if summary["critical"] > 0 else 0) after that.
♻️ Duplicate comments (7)
refactron/core/pipeline_session.py (3)
142-144: ⚠️ Potential issue | 🟡 Minor

Add the missing `-> None` on `SessionStore.__init__`. This constructor is still untyped, which is enough to trip `disallow_untyped_defs` under `refactron/`.

Proposed fix

```diff
-    def __init__(self, root_dir: Optional[Path] = None):
+    def __init__(self, root_dir: Optional[Path] = None) -> None:
         self.root_dir = Path(root_dir) if root_dir else Path.cwd()
         self.sessions_dir = self.root_dir / self.SESSIONS_DIR
```

As per coding guidelines, "refactron/**/*.py: Type annotations are required in refactron/ with mypy disallow_untyped_defs = true enabled".
refactron/**/*.py: Type annotations are required in refactron/ with mypy disallow_untyped_defs = true enabled".🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@refactron/core/pipeline_session.py` around lines 142 - 144, The constructor SessionStore.__init__ is missing a return type annotation; add the explicit "-> None" to the __init__ signature in the SessionStore class (the method that sets self.root_dir and self.sessions_dir and references self.SESSIONS_DIR) so the method is fully typed and satisfies disallow_untyped_defs under refactron.
149-152: ⚠️ Potential issue | 🟠 Major

Make session persistence atomic and skip corrupt newest files.

`save()` still writes JSON in place, and `load_latest()` still only attempts to parse the single newest file. If the process dies mid-write, one truncated newest session makes every `load_latest()` caller return `None` even when older valid sessions still exist.

Proposed fix

```diff
+import os
 @@
     def save(self, session: PipelineSession) -> None:
         self.sessions_dir.mkdir(parents=True, exist_ok=True)
         path = self._session_path(session.session_id)
-        path.write_text(json.dumps(session.to_dict(), indent=2), encoding="utf-8")
+        tmp_path = path.with_suffix(".json.tmp")
+        with tmp_path.open("w", encoding="utf-8") as fh:
+            json.dump(session.to_dict(), fh, indent=2)
+            fh.flush()
+            os.fsync(fh.fileno())
+        tmp_path.replace(path)
 @@
     def load_latest(self) -> Optional[PipelineSession]:
         if not self.sessions_dir.exists():
             return None
         paths = sorted(self.sessions_dir.glob("*.json"))
         if not paths:
             return None
-        try:
-            return PipelineSession.from_dict(json.loads(paths[-1].read_text(encoding="utf-8")))
-        except (json.JSONDecodeError, KeyError, ValueError):
-            return None
+        for path in reversed(paths):
+            try:
+                return PipelineSession.from_dict(json.loads(path.read_text(encoding="utf-8")))
+            except (json.JSONDecodeError, KeyError, ValueError, TypeError):
+                continue
+        return None
```

Also applies to: 168-177
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@refactron/core/pipeline_session.py` around lines 149 - 152, The save() implementation writes session JSON in-place and can leave a truncated newest file; update save() (function save and helper _session_path/sessions_dir) to write atomically by writing to a temporary file in the same sessions_dir (e.g., session_id + ".tmp" or use a temp name), fsync the file, then os.replace/rename to the final path to ensure an atomic swap; then modify load_latest() to iterate session files from newest to oldest (by mtime or lexicographic order), attempting to read and json.loads each file and skipping any that raise JSON parse errors or IO errors until a valid session is found (returning None only if none valid).
146-147: ⚠️ Potential issue | 🟠 Major

Validate `session_id` before turning it into a path.

`_session_path()` still interpolates raw ids into a filesystem path, so values like `../other/session` can escape `.refactron/sessions`. `load()` also still returns the parsed object even when its embedded `session_id` does not match the requested id, which means a tampered JSON file can be treated as the requested session.

Proposed fix

```diff
+import re
 @@
+_SESSION_ID_RE = re.compile(r"^sess_\d{8}_\d{6}_\d{6}$")
+
 @@
     def _session_path(self, session_id: str) -> Path:
+        if not _SESSION_ID_RE.fullmatch(session_id):
+            raise ValueError(f"Invalid session id: {session_id}")
         return self.sessions_dir / f"{session_id}.json"
 @@
         if session.session_id != session_id:
             _logger.warning(
                 "Session ID mismatch: requested %s, got %s", session_id, session.session_id
             )
-        return session
+            return None
+        return session
```

Also applies to: 154-166
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@refactron/core/pipeline_session.py` around lines 146 - 147, Validate session_id before constructing file paths and when loading JSON: in _session_path(session_id) ensure session_id contains only a safe token (e.g., alphanumeric, hyphen/underscore) or normalize and reject any value that would traverse directories (prevent characters like '/' or '..'); build the path only after validation (instead of directly using sessions_dir / f"{session_id}.json"). In the load(...) method, after reading and parsing the file, verify the parsed object's embedded session_id equals the requested session_id and raise an error if it differs (do not return parsed data on mismatch). Use the existing symbols sessions_dir, _session_path, and load to locate where to add validation and the integrity check, and raise a clear ValueError or custom exception when validation fails.

refactron/cli/status.py (2)
108-112: ⚠️ Potential issue | 🟠 Major

The rollback hint should use `--pipeline-session`. `rollback --session` is the backup-session path, so this generated next step will not undo the pipeline session being displayed.
Verify each finding against the current code and only fix it if needed. In `@refactron/cli/status.py` around lines 108 - 112, The rollback hint currently prints "refactron rollback --session {session.session_id}" which is the backup-session flag; change it to use the pipeline-session flag so the displayed pipeline session can be undone. Update the string in the status display (where console.print is called for the applied branch) to use "--pipeline-session {session.session_id}" instead of "--session", ensuring the condition that prints when applied remains the same (references: session.session_id, applied, the console.print call in status.py).
39-67: ⚠️ Potential issue | 🟠 Major

Handle unreadable or corrupt session files at the CLI boundary.

`store.list_sessions()`, `load_current()`, `load()`, and `load_latest()` can still surface `OSError`, `UnicodeDecodeError`, or `TypeError` from `.refactron/`. Right now that turns `refactron status` into a traceback instead of a friendly CLI error.
Verify each finding against the current code and only fix it if needed. In `@refactron/cli/status.py` around lines 39 - 67, Wrap calls to store.list_sessions(), store.load(session_id), store.load_current(), and store.load_latest() in try/except that catches OSError, UnicodeDecodeError, and TypeError at the CLI boundary in status.py; on exception print a concise user-friendly error via console.print (e.g., "Error reading session data: <brief msg>") and return/non-zero exit so a traceback is not shown. Ensure both the list_sessions branch and the session-loading branch around session_id use this error handling so unreadable/corrupt files are handled uniformly.

refactron/cli/refactor.py (2)
309-315: ⚠️ Potential issue | 🟠 Major

Resolve sessions and backups from the pipeline's project root, not the caller's cwd.

Both the `autofix --session` path and the new rollback path still build `SessionStore`/`BackupManager` from `_target_path.parent` or `Path.cwd()`. Resuming or rolling back the same session from a subdirectory or another shell location will miss the saved session/backups or target the wrong project.

Also applies to: 491-509
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@refactron/cli/refactor.py` around lines 309 - 315, The session and backup lookup currently use the caller cwd or _target_path.parent instead of the pipeline's project root; change construction of SessionStore and BackupManager to use the RefactronPipeline instance's project_root (use _pipeline.project_root) so _pipeline.store.load(session_id) and any BackupManager operations resolve against the actual project root. Update the creation points that build SessionStore/BackupManager (the code around where RefactronPipeline is instantiated and the separate block referenced at the other location) to pass _pipeline.project_root rather than Path.cwd() or _target_path.parent.
373-376: ⚠️ Potential issue | 🟠 Major

Use `--pipeline-session` in the undo hint. The printed command still sends users into the backup-session rollback path instead of the new pipeline rollback flow.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@refactron/cli/refactor.py` around lines 373 - 376, The undo hint printed when _applied > 0 uses the old flag and sends users to backup-session rollback; update the console.print invocation that builds the hint (where _pipeline_session.session_id is referenced) to show the new flag name --pipeline-session instead of --session so the suggested command is "refactron rollback --pipeline-session <session_id>" and points users to the pipeline rollback flow.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@refactron/cli/analysis.py`:
- Around line 285-287: The code builds _project_root from the raw target (using
_target_path.parent) which makes nested targets persist the session under the
nested folder; replace that logic with the existing project-root detection
helper used elsewhere (call that helper with the provided target/_target_path to
compute the true repository root) and pass the returned root into
RefactronPipeline(project_root=...) so sessions are persisted under the
repository root; apply the same replacement for the other occurrence around
lines 312-313 (i.e., stop using _target_path.parent and use the shared
project-root helper instead).
In `@refactron/cli/refactor.py`:
- Around line 494-495: The current code unconditionally marks a session
ROLLED_BACK and removes the active session file after calling
BackupManager.rollback_session(), which can return partial failures; change the
logic in the block around _resolved_id (and the similar block at lines ~509-513)
to inspect the result of BackupManager.rollback_session() and only (1) set the
session state to ROLLED_BACK and (2) delete .refactron/current when
rollback_session reports all restores succeeded; if rollback_session reports any
failures, leave the session state unchanged (or set an explicit PARTIAL_ROLLBACK
status if available) and do not remove .refactron/current unless _resolved_id
matches the currently active session and the rollback fully succeeded; reference
functions/vars: BackupManager.rollback_session(), _resolved_id,
pipeline_session_id, _store.get_current_id(), and the ROLLED_BACK state to
locate and update the code paths.
- Around line 306-312: The new session-aware branch constructs RefactronPipeline
using _target_path/_project_root and returns early, which bypasses
_load_config(), the safety_level mapping, and flags like --config, --profile,
and --environment; update the flow so that before creating or returning the
RefactronPipeline instance (_pipeline) you call _load_config() and apply the
resolved configuration and safety_level (or pass them into RefactronPipeline
constructor) so the session-aware path honors --config, --profile, --environment
and --safety-level exactly like the original branch did.
- Around line 348-352: The call to _pipeline.apply(...) is missing the preview
argument so the CLI's preview flag is ignored (causing autofix to write
changes); update the two places where _pipeline.apply is invoked (the call using
_pipeline_session and the similar block further down) to pass preview=preview
along with dry_run=dry_run and verify=verify so the
RefactronPipeline.apply(preview, dry_run, verify, ...) safety-first flow
(preview → backup → apply → optional rollback) is preserved.
---
Outside diff comments:
In `@refactron/cli/analysis.py`:
- Around line 220-236: The JSON output branch currently prints payload and calls
raise SystemExit before the session persistence logic runs, which prevents
--fix-on from queuing fixes and stops the current session from being written;
move the session save/queueing code (the session persistence logic used
elsewhere in this file) to execute before the JSON branch's SystemExit so that
regardless of output_format (including the other JSON block at lines ~304-323)
the code that enqueues fixes and writes the current session runs prior to
exiting; ensure you reference and invoke the same session persistence routine
used by the normal output path and then keep the click.echo(_json.dumps(payload,
indent=2)) and raise SystemExit(1 if summary["critical"] > 0 else 0) after that.
---
Duplicate comments:
In `@refactron/cli/refactor.py`:
- Around line 309-315: The session and backup lookup currently use the caller
cwd or _target_path.parent instead of the pipeline's project root; change
construction of SessionStore and BackupManager to use the RefactronPipeline
instance's project_root (use _pipeline.project_root) so
_pipeline.store.load(session_id) and any BackupManager operations resolve
against the actual project root. Update the creation points that build
SessionStore/BackupManager (the code around where RefactronPipeline is
instantiated and the separate block referenced at the other location) to pass
_pipeline.project_root rather than Path.cwd() or _target_path.parent.
- Around line 373-376: The undo hint printed when _applied > 0 uses the old flag
and sends users to backup-session rollback; update the console.print invocation
that builds the hint (where _pipeline_session.session_id is referenced) to show
the new flag name --pipeline-session instead of --session so the suggested
command is "refactron rollback --pipeline-session <session_id>" and points users
to the pipeline rollback flow.
In `@refactron/cli/status.py`:
- Around line 108-112: The rollback hint currently prints "refactron rollback
--session {session.session_id}" which is the backup-session flag; change it to
use the pipeline-session flag so the displayed pipeline session can be undone.
Update the string in the status display (where console.print is called for the
applied branch) to use "--pipeline-session {session.session_id}" instead of
"--session", ensuring the condition that prints when applied remains the same
(references: session.session_id, applied, the console.print call in status.py).
- Around line 39-67: Wrap calls to store.list_sessions(),
store.load(session_id), store.load_current(), and store.load_latest() in
try/except that catches OSError, UnicodeDecodeError, and TypeError at the CLI
boundary in status.py; on exception print a concise user-friendly error via
console.print (e.g., "Error reading session data: <brief msg>") and
return/non-zero exit so a traceback is not shown. Ensure both the list_sessions
branch and the session-loading branch around session_id use this error handling
so unreadable/corrupt files are handled uniformly.
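As a hedged illustration of the CLI-boundary handling described above: the helper name and the duck-typed store/console are made up for this sketch, not refactron's API, but the exception set matches the comment.

```python
def load_sessions_safely(store, console):
    """Return store.list_sessions(), or None after printing a concise error.

    Catches the exception types the review lists at the CLI boundary so
    corrupt or unreadable session files never surface as a traceback.
    """
    try:
        return store.list_sessions()
    except (OSError, UnicodeDecodeError, TypeError) as exc:
        console.print(f"Error reading session data: {exc}")
        return None
```

The same wrapper shape would apply to `store.load(session_id)`, `load_current()`, and `load_latest()` so both branches fail uniformly.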
In `@refactron/core/pipeline_session.py`:
- Around line 142-144: The constructor SessionStore.__init__ is missing a return
type annotation; add the explicit "-> None" to the __init__ signature in the
SessionStore class (the method that sets self.root_dir and self.sessions_dir and
references self.SESSIONS_DIR) so the method is fully typed and satisfies
disallow_untyped_defs under refactron.
- Around line 149-152: The save() implementation writes session JSON in-place
and can leave a truncated newest file; update save() (function save and helper
_session_path/sessions_dir) to write atomically by writing to a temporary file
in the same sessions_dir (e.g., session_id + ".tmp" or use a temp name), fsync
the file, then os.replace/rename to the final path to ensure an atomic swap;
then modify load_latest() to iterate session files from newest to oldest (by
mtime or lexicographic order), attempting to read and json.loads each file and
skipping any that raise JSON parse errors or IO errors until a valid session is
found (returning None only if none valid).
- Around line 146-147: Validate session_id before constructing file paths and
when loading JSON: in _session_path(session_id) ensure session_id contains only
a safe token (e.g., alphanumeric, hyphen/underscore) or normalize and reject any
value that would traverse directories (prevent characters like '/' or '..');
build the path only after validation (instead of directly using sessions_dir /
f"{session_id}.json"). In the load(...) method, after reading and parsing the
file, verify the parsed object's embedded session_id equals the requested
session_id and raise an error if it differs (do not return parsed data on
mismatch). Use the existing symbols sessions_dir, _session_path, and load to
locate where to add validation and the integrity check, and raise a clear
ValueError or custom exception when validation fails.
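The atomic-write pattern the save() comment asks for can be sketched as follows. `save_atomic` and `load_latest` are illustrative stand-ins for the SessionStore methods, and the `<id>.json` layout is taken from the comment, not verified against the code.

```python
import json
import os
from pathlib import Path


def save_atomic(sessions_dir: Path, session_id: str, payload: dict) -> Path:
    final = sessions_dir / f"{session_id}.json"
    tmp = sessions_dir / f"{session_id}.tmp"
    with open(tmp, "w", encoding="utf-8") as fh:
        json.dump(payload, fh, indent=2)
        fh.flush()
        os.fsync(fh.fileno())  # data is on disk before the swap
    os.replace(tmp, final)     # atomic rename: readers never see a partial file
    return final


def load_latest(sessions_dir: Path):
    """Newest first; skip anything truncated or unreadable."""
    for path in sorted(sessions_dir.glob("*.json"),
                       key=lambda p: p.stat().st_mtime, reverse=True):
        try:
            return json.loads(path.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue
    return None
```

`os.replace` gives the atomic swap on both POSIX and Windows, which is why it is preferred over `os.rename` here.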
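A minimal sketch of the session-id validation and load-time integrity check described in the 146-147 comment. The `_SAFE_ID` pattern, helper names, and error messages are assumptions, not the project's code.

```python
import json
import re
from pathlib import Path

# Only allow a safe token: alphanumeric plus hyphen/underscore.
_SAFE_ID = re.compile(r"^[A-Za-z0-9_-]+$")


def session_path(sessions_dir: Path, session_id: str) -> Path:
    """Build the session file path only after validating the id."""
    if not _SAFE_ID.match(session_id):
        raise ValueError(f"invalid session id: {session_id!r}")
    return sessions_dir / f"{session_id}.json"


def load_checked(sessions_dir: Path, session_id: str) -> dict:
    """Load a session and verify its embedded id matches the request."""
    data = json.loads(session_path(sessions_dir, session_id).read_text())
    if data.get("session_id") != session_id:
        raise ValueError("session id mismatch in stored file")
    return data
```

Rejecting `/` and `..` at the path-building step closes the traversal hole, while the embedded-id check catches renamed or tampered files.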
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1cdccfa1-0941-4e0c-b022-a959b38ad3e1
📒 Files selected for processing (6)
- refactron/cli/analysis.py
- refactron/cli/refactor.py
- refactron/cli/run.py
- refactron/cli/status.py
- refactron/core/pipeline_session.py
- tests/test_cli_status.py
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/test_cli_status.py
- refactron/cli/run.py
_target_path = Path(target) if target else Path.cwd()
_project_root = _target_path if _target_path.is_dir() else _target_path.parent
_pipeline = RefactronPipeline(project_root=_project_root)
Persist the session under the project root, not target.parent.
For nested targets, _project_root becomes the selected directory or file parent, so the session/current pointer lands under that nested path instead of the repository root. refactron status and refactron autofix from the workspace root then won't see the session. Reuse the same project-root detection helper you already use elsewhere instead of rebuilding it from raw target.
Also applies to: 312-313
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@refactron/cli/analysis.py` around lines 285 - 287, The code builds
_project_root from the raw target (using _target_path.parent) which makes nested
targets persist the session under the nested folder; replace that logic with the
existing project-root detection helper used elsewhere (call that helper with the
provided target/_target_path to compute the true repository root) and pass the
returned root into RefactronPipeline(project_root=...) so sessions are persisted
under the repository root; apply the same replacement for the other occurrence
around lines 312-313 (i.e., stop using _target_path.parent and use the shared
project-root helper instead).
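One plausible shape for the shared project-root helper the prompt refers to. The marker set (`.git`, `pyproject.toml`, `setup.cfg`) is an assumption about how refactron identifies a repository root; the real helper may differ.

```python
from pathlib import Path

_ROOT_MARKERS = (".git", "pyproject.toml", "setup.cfg")


def find_project_root(start: Path) -> Path:
    """Walk up from `start` to the first directory containing a root marker."""
    here = start.resolve()
    if here.is_file():
        here = here.parent
    for candidate in (here, *here.parents):
        if any((candidate / marker).exists() for marker in _ROOT_MARKERS):
            return candidate
    return here  # no marker found: fall back to the target's directory
```

Passing the result into `RefactronPipeline(project_root=...)` keeps the session under the repository root regardless of how deeply nested the analyzed target is.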
# ── Session-aware pipeline ────────────────────────────────────────
from refactron.core.pipeline import RefactronPipeline

_target_path = Path(target) if target else Path.cwd()
_project_root = _target_path if _target_path.is_dir() else _target_path.parent
_pipeline = RefactronPipeline(project_root=_project_root)
The new session-aware path bypasses the command's config and safety settings.
Because this branch returns at Line 378, the old _load_config() call and safety_level mapping are now unreachable. --config, --profile, --environment, and --safety-level no longer affect the new pipeline flow, so the command surface and runtime behavior are out of sync.
Also applies to: 378-404
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@refactron/cli/refactor.py` around lines 306 - 312, The new session-aware
branch constructs RefactronPipeline using _target_path/_project_root and returns
early, which bypasses _load_config(), the safety_level mapping, and flags like
--config, --profile, and --environment; update the flow so that before creating
or returning the RefactronPipeline instance (_pipeline) you call _load_config()
and apply the resolved configuration and safety_level (or pass them into
RefactronPipeline constructor) so the session-aware path honors --config,
--profile, --environment and --safety-level exactly like the original branch
did.
_pipeline.apply(
    _pipeline_session,
    dry_run=dry_run,
    verify=verify,
)
autofix now applies by default even though --preview is still the default.
This branch ignores preview and passes only dry_run into RefactronPipeline.apply(). With the current defaults (preview=True, dry_run=False), refactron autofix writes changes immediately instead of previewing them.
Proposed fix
- _pipeline.apply(
+ _effective_dry_run = dry_run or preview
+
+ _pipeline.apply(
_pipeline_session,
- dry_run=dry_run,
+ dry_run=_effective_dry_run,
verify=verify,
)
@@
- if dry_run:
+ if _effective_dry_run:
      console.print("\n[bold]Dry-run complete[/bold]")
Based on learnings, "All refactoring must go through safety-first pipeline: preview → backup → apply → optional rollback".
Also applies to: 358-366
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@refactron/cli/refactor.py` around lines 348 - 352, The call to
_pipeline.apply(...) is missing the preview argument so the CLI's preview flag
is ignored (causing autofix to write changes); update the two places where
_pipeline.apply is invoked (the call using _pipeline_session and the similar
block further down) to pass preview=preview along with dry_run=dry_run and
verify=verify so the RefactronPipeline.apply(preview, dry_run, verify, ...)
safety-first flow (preview → backup → apply → optional rollback) is preserved.
_resolved_id = pipeline_session_id or _store.get_current_id()
Don't clear/close the session unless rollback fully succeeded.
BackupManager.rollback_session() can return failed restores, but this branch still marks the session ROLLED_BACK and unconditionally deletes .refactron/current. That misreports partial rollbacks, and it also clears the active session even when the user explicitly rolled back some other session.
Also applies to: 509-513
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@refactron/cli/refactor.py` around lines 494 - 495, The current code
unconditionally marks a session ROLLED_BACK and removes the active session file
after calling BackupManager.rollback_session(), which can return partial
failures; change the logic in the block around _resolved_id (and the similar
block at lines ~509-513) to inspect the result of
BackupManager.rollback_session() and only (1) set the session state to
ROLLED_BACK and (2) delete .refactron/current when rollback_session reports all
restores succeeded; if rollback_session reports any failures, leave the session
state unchanged (or set an explicit PARTIAL_ROLLBACK status if available) and do
not remove .refactron/current unless _resolved_id matches the currently active
session and the rollback fully succeeded; reference functions/vars:
BackupManager.rollback_session(), _resolved_id, pipeline_session_id,
_store.get_current_id(), and the ROLLED_BACK state to locate and update the code
paths.
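The guarded rollback bookkeeping can be sketched like this. The `failed` list mirrors what the comment says `BackupManager.rollback_session()` reports; the plain-string state and the fake store are illustrative, not refactron's types.

```python
def finish_rollback(session, store, failed, active_id):
    """Only mark ROLLED_BACK and clear the pointer on a full, active restore."""
    if failed:
        return False  # partial restore: leave session state untouched
    session.state = "ROLLED_BACK"
    store.save(session)
    if session.session_id == active_id:
        store.clear_current()  # never clear some other session's pointer
    return True
```

Two conditions gate the cleanup: no failed restores, and the rolled-back session actually being the active one.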
Previously `analyze` only queued issues when `--fix-on` was passed, leaving the session empty and causing `autofix --dry-run` to silently report "Dry-run complete" with nothing shown. Now:
- `analyze` always queues all discovered issues into the session
- `autofix --fix-on CRITICAL` (default) filters which queued items actually get applied, marking the rest as skipped
- `autofix` shows a clear message when no items match the threshold and suggests `--fix-on WARNING` as next step
- Dry-run output shows the count and a helpful fallback message
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Previously the output was misleading - "0 items queued" and "No fixable issues" with no explanation. Now:
- Shows: N issues · M have automated fixers · K no fixer available
- When M=0, explains which issue types CAN be auto-fixed and why complexity/code-smell issues require manual refactoring
- Removes redundant "Fixing issues at X+ level (N items)" noise line
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 5
♻️ Duplicate comments (8)
refactron/cli/refactor.py (5)
411-415: ⚠️ Potential issue | 🟠 Major: Use `--pipeline-session` in the undo hint.
The printed command still uses `--session`, which routes users into the legacy backup-session flow instead of the new pipeline rollback path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@refactron/cli/refactor.py` around lines 411 - 415, the undo hint prints a legacy flag (--session), causing the rollback to use the old backup-session flow; update the printed command in the block that checks `_applied > 0` (where `console.print` references `_pipeline_session.session_id`) to use `--pipeline-session` instead of `--session` so the hint points to the new pipeline rollback path.
547-551: ⚠️ Potential issue | 🟠 Major: Only mark the session rolled back after a complete restore.
`BackupManager.rollback_session()` reports failed files, but this code always sets `ROLLED_BACK` and clears the current pointer. Partial restores then look successful, and rolling back a different session can clear the active one.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@refactron/cli/refactor.py` around lines 547 - 551, the code unconditionally marks the pipeline session as rolled back and clears the current pointer after calling _bm.rollback_session(_pipeline_session.backup_session_id); change this to inspect the returned values (_restored_count, _failed) and only set _pipeline_session.state = SessionState.ROLLED_BACK, call _store.save(_pipeline_session), and _store.clear_current() when the rollback completed without failures (e.g., _failed == 0 or a similar success condition); otherwise leave the session state unchanged (or set an appropriate FAILED_ROLLBACK state) and persist that outcome so partial restores aren't treated as successful.
383-387: ⚠️ Potential issue | 🔴 Critical: Default `autofix` now applies changes instead of previewing them.
The command still defaults to `--preview`, but this call only forwards `dry_run`. With the defaults (preview=True, dry_run=False), `refactron autofix` becomes a real apply.
Suggested fix
- _pipeline.apply(
+ _effective_dry_run = dry_run or preview
+
+ _pipeline.apply(
      _pipeline_session,
-     dry_run=dry_run,
+     dry_run=_effective_dry_run,
      verify=verify,
  )
@@
- if dry_run:
+ if _effective_dry_run:
Based on learnings, "All refactoring must go through safety-first pipeline: preview → backup → apply → optional rollback".
Also applies to: 393-404
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@refactron/cli/refactor.py` around lines 383 - 387, the apply call on _pipeline is only forwarding dry_run and verify, so the preview flag is ignored and running the command with default preview=True still performs an actual apply; update the _pipeline.apply invocation(s) (reference: _pipeline.apply and _pipeline_session) to also accept and forward the preview/autofix flag (e.g., preview=preview or preview=not autofix depending on variable names) so the pipeline honors preview mode, and make the same change for the other similar call block around the _pipeline.apply at the 393-404 region.
318-320: ⚠️ Potential issue | 🟠 Major: Resolve sessions and backups from the pipeline's project root.
`autofix` roots the store off `Path(target).parent`, while `rollback` hard-codes `Path.cwd()` for both `SessionStore` and `BackupManager`. Resuming or rolling back the same pipeline session from a different directory will miss the saved session/backups or hit the wrong workspace.
Also applies to: 529-547
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@refactron/cli/refactor.py` around lines 318 - 320, the pipeline project root is computed into _project_root and passed to RefactronPipeline(project_root=_project_root), but SessionStore and BackupManager are still rooted off Path.cwd() or Path(target).parent in other flows (autofix, rollback), causing resumed sessions/backups to be looked up in the wrong workspace; update the code that constructs SessionStore and BackupManager (and any uses in the autofix and rollback handlers) to derive their base directory from the pipeline's project_root (e.g., use RefactronPipeline.project_root or the local _project_root variable) instead of hard-coding Path.cwd() or Path(target).parent; also apply the same change to the other occurrence noted (lines 529-547) so all session/backup operations consistently use the pipeline project root.
315-320: ⚠️ Potential issue | 🟠 Major: The session-aware path bypasses the command's config and safety flags.
This branch returns at Line 416 without ever calling `_load_config()` or mapping `--safety-level`, and `RefactronPipeline` is instantiated with its default `FixRiskLevel.SAFE`. `--config`, `--profile`, `--environment`, and `--safety-level` therefore stop affecting `autofix` once the session-based flow is used.
Also applies to: 337-343, 383-416
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@refactron/cli/refactor.py` around lines 315 - 320, the session-aware branch currently constructs RefactronPipeline without loading command config or applying CLI flags, so call _load_config() before returning and map CLI options (--config, --profile, --environment, --safety-level) into the session flow; ensure RefactronPipeline is instantiated with the selected risk level (pass the mapped FixRiskLevel value instead of relying on the default) and preserve any config-derived settings when creating _pipeline using _project_root/_target_path so the session path respects config and safety flags.
refactron/core/pipeline.py (2)
140-178: ⚠️ Potential issue | 🔴 Critical: Take and validate real backups before any non-dry-run write.
This branch only creates a backup session id; it never copies the target files before `fix_file(..., dry_run=False)` can overwrite them. That leaves `session.backup_session_id` pointing at a rollback path that may have nothing restorable. Back up each target first, then call `BackupManager.validate_backup_integrity(session.backup_session_id)` and block applies if any path is corrupt.
Based on learnings, "All refactoring must go through safety-first pipeline: preview → backup → apply → optional rollback". As per coding guidelines, "refactron/core/*.py: Use `BackupManager.validate_backup_integrity(session_id)` for backup integrity validation, returning (valid_paths, corrupt_paths)".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@refactron/core/pipeline.py` around lines 140 - 178, the code creates a backup session but never actually copies files before calling fix_engine.fix_file, so create per-file backups immediately after obtaining backup_session_id (use backup_manager.create_backup_session and backup_manager.backup_dir) and before calling fix_engine.fix_file; then call BackupManager.validate_backup_integrity(session.backup_session_id) to get (valid_paths, corrupt_paths) and, if any corrupt_paths exist, mark the corresponding FixItem(s) as BLOCKED with block_reason and add them to session.blocked_fixes, only proceeding to call fix_engine.fix_file for files whose backups validated; ensure session.backup_session_id remains set and item.backup_path points to backup_manager.backup_dir / backup_session_id / file_path.name for applied items.
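A sketch of the backup-then-validate ordering under the stated guideline. BackupManager's real methods are stood in here by plain functions, and only the `(valid_paths, corrupt_paths)` contract of `validate_backup_integrity` comes from the review; every other name is illustrative.

```python
import shutil
from pathlib import Path


def backup_targets(backup_dir: Path, session_id: str, files):
    """Copy every target into the backup session folder BEFORE any write."""
    dest = backup_dir / session_id
    dest.mkdir(parents=True, exist_ok=True)
    for f in files:
        shutil.copy2(f, dest / Path(f).name)
    return dest


def apply_with_backups(files, backup_dir, session_id, validate, fix_file):
    """Back up, validate, then fix only files whose backups checked out."""
    backup_targets(backup_dir, session_id, files)
    valid, corrupt = validate(session_id)  # (valid_paths, corrupt_paths)
    applied, blocked = [], []
    for f in files:
        # Block the apply for any file whose backup failed validation.
        (blocked if Path(f).name in corrupt else applied).append(f)
    for f in applied:
        fix_file(f)  # safe: a verified backup exists for this file
    return applied, blocked
```

The key property is ordering: the copy happens before any `fix_file` call, and a corrupt backup blocks that file's apply rather than the whole run.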
97-123: ⚠️ Potential issue | 🟠 Major: Keep the original issue payload all the way through `apply()`.
`queue_issues()` only persists a reduced shape, `_find_fixer_name()` probes fixers against `"x = 1\n"`, and `_queue_items_to_issues()` later recreates everything as `IssueCategory.COMPLEXITY`. That means `fix_file()` sees a different issue than the analyzer produced, so items can be queued as fixable and then skipped or misapplied when you actually apply them.
Also applies to: 156-160, 189-200
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@refactron/core/pipeline.py` around lines 97 - 123, queue_issues() is truncating and altering CodeIssue data (storing only a reduced FixQueueItem and probing fixers with a dummy "x = 1\n" in _find_fixer_name()), causing mismatches when _queue_items_to_issues() reconstructs issues and apply() sees different payloads; fix by persisting the full original issue payload (serialize the entire CodeIssue) into the queue item instead of losing fields, change _find_fixer_name() to probe fixers using the actual issue source/content (or skip probing and record candidate fixer names without mutating payload), and ensure _queue_items_to_issues() deserializes and restores the original IssueCategory and all fields so apply(), FixEngine.fixers, and FixStatus logic operate on the exact analyzer-produced CodeIssue throughout.
refactron/cli/analysis.py (1)
287-289: ⚠️ Potential issue | 🟠 Major: Persist the session under the actual project root.
Using `Path(target)`/`.parent` here makes `.refactron` land under the selected subdirectory or file parent instead of the repository root. Commands run later from the workspace root then miss the current session.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@refactron/cli/analysis.py` around lines 287 - 289, the current logic sets _project_root from Path(target) or its parent, which places the session (.refactron) under a selected subdirectory; update this to resolve and use the repository root instead so sessions persist at the real project root. Replace the _project_root computation used to construct RefactronPipeline(project_root=_project_root) with code that finds the repo root (e.g., walk parents from Path(target).resolve() or call git to get the top-level) and set _project_root to that repository root before instantiating RefactronPipeline; reference and update the symbols _target_path, _project_root, and the RefactronPipeline instantiation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@refactron/cli/analysis.py`:
- Around line 292-304: When constructing the PipelineSession instance by hand,
explicitly set its lifecycle/status to SessionState.ANALYZED (or the dataclass
field name used for lifecycle, e.g., "status" or "state") so it matches
RefactronPipeline.analyze() behavior; update the PipelineSession(...) call in
analysis.py to include the lifecycle field with SessionState.ANALYZED and import
or reference SessionState where used (ensure you use the same enum member name
as in RefactronPipeline.analyze to keep readers and status checks consistent).
- Around line 100-106: The --fix-on flag is parsed but never used when saving
the queued session; modify the call to queue_issues so it respects the parsed
fix_on threshold by passing the computed minimum severity (from _FIX_LEVEL_MAP
using the fix_on value, e.g., _FIX_LEVEL_MAP[fix_on.upper()] when fix_on is not
None) as the min_level argument to queue_issues(_pipeline_session, _all_issues,
min_level=...), and ensure the same change is applied to the other code path(s)
mentioned (lines ~280-309) where queue_issues is invoked so the saved session
actually filters issues by the user-specified level.
In `@refactron/cli/refactor.py`:
- Around line 529-559: The rollback branch currently triggers as soon as
_resolved_id is set (via pipeline_session_id or SessionStore.get_current_id()),
which runs before handling explicit rollback modes like --list, --clear,
--use-git or legacy positional IDs; move or guard this logic so it only executes
when the user actually requested a rollback action (e.g., when a specific
rollback flag/mode is set), not merely when an active session exists: modify the
control flow around _resolved_id, SessionStore, and the block that uses
BackupManager.rollback_session so that flag checks for
list/clear/use-git/positional-id are evaluated first and only when none apply
and the explicit rollback mode is present do you load the session, validate
backup_session_id, prompt for confirmation, and call
BackupManager.rollback_session; ensure functions/classes referenced are
SessionStore, _resolved_id/pipeline_session_id, BackupManager.rollback_session,
and _pipeline_session are used in the guarded branch.
- Around line 354-364: The current loop mutates stored session items (setting
_item.status = _FixStatus.SKIPPED) which persists the --fix-on filter; instead
leave _pipeline_session.fix_queue unchanged and apply the threshold only
transiently. Replace the in-place mutation with logic that computes a local view
or predicates: use _LEVEL_RANK, _threshold and _FixStatus to count pending items
whose level meets the threshold (e.g., count items where item.status ==
_FixStatus.PENDING and _LEVEL_RANK.get(item.level.upper(),0) >= _threshold) and
pass that filtered/local view into the apply step or consumer; do not write back
SKIPPED into _pipeline_session.fix_queue. Ensure all references to
_pending_count and any downstream use operate on the transient/filtered view
rather than mutated stored items.
In `@refactron/core/pipeline.py`:
- Around line 12-24: The import block in refactron/core/pipeline.py is not
sorted per repository conventions; re-run isort (profile='black') on this module
to reorder the imports (the block importing AutoFixEngine, FixRiskLevel,
BackupManager, CodeIssue, IssueCategory, IssueLevel, FixQueueItem, FixStatus,
PipelineSession, SessionState, SessionStore, RefactronConfig, and Refactron) so
they follow the project's isort/black profile ordering and satisfy pre-commit.
---
Duplicate comments:
In `@refactron/cli/analysis.py`:
- Around line 287-289: The current logic sets _project_root from Path(target) or
its parent which places the session (.refactron) under a selected subdirectory;
update this to resolve and use the repository root instead so sessions persist
at the real project root. Replace the _project_root computation used to
construct RefactronPipeline(project_root=_project_root) with code that finds the
repo root (e.g., walk parents from Path(target).resolve() or call git to get the
top-level) and set _project_root to that repository root before instantiating
RefactronPipeline; reference and update the symbols _target_path, _project_root,
and the RefactronPipeline instantiation.
In `@refactron/cli/refactor.py`:
- Around line 411-415: The undo hint prints a legacy flag (--session) causing
the rollback to use the old backup-session flow; update the printed command in
the block that checks `_applied > 0` (where `console.print` references
`_pipeline_session.session_id`) to use `--pipeline-session` instead of
`--session` so the hint points to the new pipeline rollback path.
- Around line 547-551: The code unconditionally marks the pipeline session as
rolled back and clears the current pointer after calling
_bm.rollback_session(_pipeline_session.backup_session_id); change this to
inspect the returned values (_restored_count, _failed) and only set
_pipeline_session.state = SessionState.ROLLED_BACK, call
_store.save(_pipeline_session), and _store.clear_current() when the rollback
completed without failures (e.g., _failed == 0 or similar success condition);
otherwise leave the session state unchanged (or set an appropriate
FAILED_ROLLBACK state) and persist that outcome so partial restores aren't
treated as successful.
- Around line 383-387: The apply call on _pipeline is only forwarding dry_run
and verify, so the preview flag is ignored and running the command with default
preview=True still performs an actual apply; update the _pipeline.apply
invocation(s) (reference: _pipeline.apply and _pipeline_session) to also accept
and forward the preview/autofix flag (e.g., preview=preview or preview=not
autofix depending on variable names) so the pipeline honors preview mode, and
make the same change for the other similar call block around the _pipeline.apply
at the 393-404 region.
- Around line 318-320: The pipeline project root is computed into _project_root
and passed to RefactronPipeline(project_root=_project_root) but SessionStore and
BackupManager are still rooted off Path.cwd() or Path(target).parent in other
flows (autofix, rollback), causing resumed sessions/backups to be looked up in
the wrong workspace; update the code that constructs SessionStore and
BackupManager (and any uses in autofix and rollback handlers) to derive their
base directory from the pipeline's project_root (e.g., use
RefactronPipeline.project_root or the local _project_root variable) instead of
hard-coding Path.cwd() or Path(target).parent—also apply the same change to the
other occurrence noted (lines 529-547) so all session/backup operations
consistently use the pipeline project root.
- Around line 315-320: The session-aware branch currently constructs
RefactronPipeline without loading command config or applying CLI flags, so call
_load_config() before returning and map CLI options (--config, --profile,
--environment, --safety-level) into the session flow; ensure RefactronPipeline
is instantiated with the selected risk level (pass the mapped FixRiskLevel value
instead of relying on the default) and preserve any config-derived settings when
creating _pipeline using _project_root/_target_path so the session path respects
config and safety flags.
In `@refactron/core/pipeline.py`:
- Around line 140-178: The code creates a backup session but never actually
copies files before calling fix_engine.fix_file, so create per-file backups
immediately after obtaining backup_session_id (use
backup_manager.create_backup_session and backup_manager.backup_dir) and before
calling fix_engine.fix_file; then call
BackupManager.validate_backup_integrity(session.backup_session_id) to get
(valid_paths, corrupt_paths) and if any corrupt_paths exist mark the
corresponding FixItem(s) as BLOCKED with block_reason and add them to
session.blocked_fixes, only proceeding to call fix_engine.fix_file for files
whose backups validated; ensure session.backup_session_id remains set and
item.backup_path points to backup_manager.backup_dir / backup_session_id /
file_path.name for applied items.
- Around line 97-123: queue_issues() is truncating and altering CodeIssue data
(storing only a reduced FixQueueItem and probing fixers with a dummy "x = 1\n"
in _find_fixer_name()), causing mismatches when _queue_items_to_issues()
reconstructs issues and apply() sees different payloads; fix by persisting the
full original issue payload (serialize the entire CodeIssue) into the queue item
instead of losing fields, change _find_fixer_name() to probe fixers using the
actual issue source/content (or skip probing and record candidate fixer names
without mutating payload), and ensure _queue_items_to_issues() deserializes and
restores the original IssueCategory and all fields so apply(), FixEngine.fixers,
and FixStatus logic operate on the exact analyzer-produced CodeIssue throughout.
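The round-trip the prompt asks for can be illustrated with `dataclasses.asdict`. This `CodeIssue` is a reduced stand-in for the analyzer's real dataclass, not its actual field list; the point is that serializing the whole dataclass loses nothing.

```python
from dataclasses import asdict, dataclass


@dataclass
class CodeIssue:  # stand-in with hypothetical fields
    category: str
    level: str
    file_path: str
    line: int
    message: str


def to_queue_item(issue: CodeIssue) -> dict:
    """Persist the FULL issue payload inside the queue item."""
    return {"status": "PENDING", "issue": asdict(issue)}


def to_issue(item: dict) -> CodeIssue:
    """Reconstruct the exact analyzer-produced issue, category included."""
    return CodeIssue(**item["issue"])
```

Because every field survives the round-trip, `apply()` operates on the same issue the analyzer produced instead of a re-guessed `IssueCategory.COMPLEXITY` shape.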
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 26058cde-73be-4781-945b-4ccad693df62
📒 Files selected for processing (3)
- refactron/cli/analysis.py
- refactron/cli/refactor.py
- refactron/core/pipeline.py
@click.option(
    "--fix-on",
    "fix_on",
    type=click.Choice(["CRITICAL", "ERROR", "WARNING", "INFO"], case_sensitive=False),
    default=None,
    help="Auto-queue issues at this level and above for fixing after analysis.",
)
--fix-on is exposed but never affects the queued session.
The flag is parsed and _FIX_LEVEL_MAP is built, but this path still calls queue_issues(_pipeline_session, _all_issues) with no min_level. The saved session therefore ignores the user's threshold even though the help text says otherwise.
Also applies to: 280-309
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@refactron/cli/analysis.py` around lines 100 - 106, The --fix-on flag is
parsed but never used when saving the queued session; modify the call to
queue_issues so it respects the parsed fix_on threshold by passing the computed
minimum severity (from _FIX_LEVEL_MAP using the fix_on value, e.g.,
_FIX_LEVEL_MAP[fix_on.upper()] when fix_on is not None) as the min_level
argument to queue_issues(_pipeline_session, _all_issues, min_level=...), and
ensure the same change is applied to the other code path(s) mentioned (lines
~280-309) where queue_issues is invoked so the saved session actually filters
issues by the user-specified level.
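Threading the parsed threshold into queue_issues() might look like the following. `queue_with_threshold` is a hypothetical wrapper, and this `_FIX_LEVEL_MAP` only assumes the usual four-level ranking.

```python
_FIX_LEVEL_MAP = {"INFO": 0, "WARNING": 1, "ERROR": 2, "CRITICAL": 3}


def queue_with_threshold(queue_issues, session, issues, fix_on=None):
    """Pass the user's --fix-on choice through as min_level (None = no filter)."""
    min_level = _FIX_LEVEL_MAP[fix_on.upper()] if fix_on else None
    return queue_issues(session, issues, min_level=min_level)
```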
_pipeline_session = PipelineSession(
    session_id=_session_id,
    target=str(_target_path),
    created_at=datetime.now(timezone.utc).isoformat(),
    total_files=summary.get("total_files", 0),
    total_issues=summary.get("total_issues", 0),
    issues_by_level={
        "CRITICAL": summary.get("critical", 0),
        "ERROR": summary.get("errors", 0),
        "WARNING": summary.get("warnings", 0),
        "INFO": summary.get("info", 0),
    },
)
Set the lifecycle state explicitly when creating the session by hand.
RefactronPipeline.analyze() marks new sessions as SessionState.ANALYZED, but this manual construction relies on the dataclass default instead. A freshly analyzed session can therefore appear pending to `refactron status` and other session readers.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@refactron/cli/analysis.py` around lines 292 - 304, When constructing the
PipelineSession instance by hand, explicitly set its lifecycle/status to
SessionState.ANALYZED (or the dataclass field name used for lifecycle, e.g.,
"status" or "state") so it matches RefactronPipeline.analyze() behavior; update
the PipelineSession(...) call in analysis.py to include the lifecycle field with
SessionState.ANALYZED and import or reference SessionState where used (ensure
you use the same enum member name as in RefactronPipeline.analyze to keep
readers and status checks consistent).
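A trimmed-down illustration of the bug and the fix. The enum members and field names here are stand-ins assumed to mirror the real `SessionState`/`PipelineSession` definitions; only the principle matters: pass the lifecycle state explicitly rather than leaning on the dataclass default.

```python
from dataclasses import dataclass
from enum import Enum


class SessionState(Enum):  # stand-in mirroring the real enum
    PENDING = "pending"
    ANALYZED = "analyzed"


@dataclass
class PipelineSession:  # trimmed stand-in for the real dataclass
    session_id: str
    # The dataclass default the hand-built construction silently relied on:
    state: SessionState = SessionState.PENDING


# Explicitly set the lifecycle state, matching RefactronPipeline.analyze():
session = PipelineSession(session_id="abc123", state=SessionState.ANALYZED)
```

Without the explicit `state=` argument, the session would report PENDING even though analysis already completed.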
```python
# Filter queue by --fix-on level: mark items below threshold as skipped
_LEVEL_RANK = {"INFO": 0, "WARNING": 1, "ERROR": 2, "CRITICAL": 3}
_threshold = _LEVEL_RANK.get(fix_on.upper(), 3)
from refactron.core.pipeline_session import FixStatus as _FixStatus

for _item in _pipeline_session.fix_queue:
    if _item.status == _FixStatus.PENDING:
        if _LEVEL_RANK.get(_item.level.upper(), 0) < _threshold:
            _item.status = _FixStatus.SKIPPED

_pending_count = len([i for i in _pipeline_session.fix_queue if i.status == _FixStatus.PENDING])
```
Don't persist --fix-on filtering into the stored queue.
This mutates queued PENDING items to SKIPPED before apply. After one CRITICAL-only run, a later WARNING run against the same session never sees those lower-severity items again.
🧰 Tools
🪛 GitHub Actions: Pre-commit
[error] black failed: files were modified by this hook (reformatted refactron/cli/refactor.py). Re-run pre-commit to apply changes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@refactron/cli/refactor.py` around lines 354 - 364, The current loop mutates
stored session items (setting _item.status = _FixStatus.SKIPPED) which persists
the --fix-on filter; instead leave _pipeline_session.fix_queue unchanged and
apply the threshold only transiently. Replace the in-place mutation with logic
that computes a local view or predicates: use _LEVEL_RANK, _threshold and
_FixStatus to count pending items whose level meets the threshold (e.g., count
items where item.status == _FixStatus.PENDING and
_LEVEL_RANK.get(item.level.upper(),0) >= _threshold) and pass that
filtered/local view into the apply step or consumer; do not write back SKIPPED
into _pipeline_session.fix_queue. Ensure all references to _pending_count and
any downstream use operate on the transient/filtered view rather than mutated
stored items.
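The transient-view approach the prompt asks for can be sketched like this. `FixStatus` and the dict-shaped queue items are stand-ins for the real `refactron.core.pipeline_session` types; the point is that the threshold is computed on the fly and the stored queue is never written to.

```python
from enum import Enum


class FixStatus(Enum):  # stand-in for refactron.core.pipeline_session.FixStatus
    PENDING = "pending"
    SKIPPED = "skipped"


_LEVEL_RANK = {"INFO": 0, "WARNING": 1, "ERROR": 2, "CRITICAL": 3}


def eligible_items(fix_queue, fix_on):
    """Return a transient view of PENDING items at/above the --fix-on level.

    The stored queue is left untouched, so a later run with a lower
    threshold still sees the lower-severity items.
    """
    threshold = _LEVEL_RANK.get(fix_on.upper(), 3)
    return [
        item
        for item in fix_queue
        if item["status"] == FixStatus.PENDING
        and _LEVEL_RANK.get(item["level"].upper(), 0) >= threshold
    ]
```

`_pending_count` and the apply step would then consume `eligible_items(...)` instead of the mutated queue, so a CRITICAL-only run followed by a WARNING run against the same session behaves as expected.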
```python
_store = SessionStore(root_dir=Path.cwd())

# Use explicit ID, else fall back to active workspace session
_resolved_id = pipeline_session_id or _store.get_current_id()

if _resolved_id:
    _pipeline_session = _store.load(_resolved_id)
    if _pipeline_session is None:
        console.print(f"[red]Session not found: {_resolved_id}[/red]")
        raise SystemExit(1)
    if not _pipeline_session.applied_fixes:
        console.print("[yellow]No applied fixes in this session to roll back.[/yellow]")
        return
    if not _pipeline_session.backup_session_id:
        console.print("[red]Session has no backup ID — cannot roll back.[/red]")
        raise SystemExit(1)

    _bm = BackupManager(root_dir=Path.cwd())
    _restored_count, _failed = _bm.rollback_session(_pipeline_session.backup_session_id)

    _pipeline_session.state = SessionState.ROLLED_BACK
    _store.save(_pipeline_session)
    _store.clear_current()

    console.print(
        f"[green]Rolled back {_restored_count} file(s) from session "
        f"{_resolved_id}[/green]"
    )
    for _f in _failed:
        console.print(f"[red]  Failed to restore: {_f}[/red]")
    return
```
Don't auto-execute pipeline rollback before honoring the explicit rollback mode.
Because _resolved_id is handled before --list, --clear, --use-git, or the legacy positional session id are checked, any active current pipeline session makes this branch run immediately. refactron rollback --list can restore files instead of listing them, and the confirmation prompt is skipped entirely.
🧰 Tools
🪛 GitHub Actions: Pre-commit
[error] black failed: files were modified by this hook (reformatted refactron/cli/refactor.py). Re-run pre-commit to apply changes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@refactron/cli/refactor.py` around lines 529 - 559, The rollback branch
currently triggers as soon as _resolved_id is set (via pipeline_session_id or
SessionStore.get_current_id()), which runs before handling explicit rollback
modes like --list, --clear, --use-git or legacy positional IDs; move or guard
this logic so it only executes when the user actually requested a rollback
action (e.g., when a specific rollback flag/mode is set), not merely when an
active session exists: modify the control flow around _resolved_id,
SessionStore, and the block that uses BackupManager.rollback_session so that
flag checks for list/clear/use-git/positional-id are evaluated first and only
when none apply and the explicit rollback mode is present do you load the
session, validate backup_session_id, prompt for confirmation, and call
BackupManager.rollback_session; ensure functions/classes referenced are
SessionStore, _resolved_id/pipeline_session_id, BackupManager.rollback_session,
and _pipeline_session are used in the guarded branch.
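The dispatch order the prompt calls for can be reduced to this sketch. The function and flag names are hypothetical stand-ins for the real Click options; the essential fix is that the explicit modes (`--list`, `--clear`, `--use-git`, legacy positional ID) are checked before any auto-resolved pipeline session is allowed to trigger a restore.

```python
def rollback(list_sessions=False, clear=False, use_git=False,
             positional_id=None, pipeline_session_id=None, current_id=None):
    """Hypothetical dispatcher: explicit rollback modes win before the
    auto-resolved pipeline session is considered."""
    if list_sessions:
        return "list"          # --list must never restore files
    if clear:
        return "clear"
    if use_git:
        return "git"
    if positional_id:
        return f"legacy:{positional_id}"
    # Only now fall back to the explicit or current pipeline session.
    resolved = pipeline_session_id or current_id
    if resolved:
        return f"pipeline:{resolved}"
    return "nothing"
```

With this ordering, `refactron rollback --list` lists sessions even when a current pipeline session exists, instead of silently restoring files.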
Summary by CodeRabbit
New Features
Tests